00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1060 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3722 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.145 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.146 The recommended git tool is: git 00:00:00.146 using credential 00000000-0000-0000-0000-000000000002 00:00:00.149 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.165 Fetching changes from the remote Git repository 00:00:00.166 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.185 Using shallow fetch with depth 1 00:00:00.185 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.185 > git --version # timeout=10 00:00:00.210 > git --version # 'git version 2.39.2' 00:00:00.210 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.243 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.243 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.428 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.439 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.453 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.453 > git config core.sparsecheckout # timeout=10 00:00:04.463 > git read-tree -mu HEAD # timeout=10 00:00:04.477 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.497 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.497 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.576 [Pipeline] Start of Pipeline 00:00:04.588 [Pipeline] library 00:00:04.589 Loading library shm_lib@master 00:00:04.590 Library shm_lib@master is cached. Copying from home. 00:00:04.600 [Pipeline] node 00:00:04.628 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.629 [Pipeline] { 00:00:04.640 [Pipeline] catchError 00:00:04.641 [Pipeline] { 00:00:04.653 [Pipeline] wrap 00:00:04.661 [Pipeline] { 00:00:04.668 [Pipeline] stage 00:00:04.669 [Pipeline] { (Prologue) 00:00:04.877 [Pipeline] sh 00:00:05.677 + logger -p user.info -t JENKINS-CI 00:00:05.706 [Pipeline] echo 00:00:05.708 Node: WFP4 00:00:05.717 [Pipeline] sh 00:00:06.058 [Pipeline] setCustomBuildProperty 00:00:06.066 [Pipeline] echo 00:00:06.067 Cleanup processes 00:00:06.070 [Pipeline] sh 00:00:06.357 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.357 5535 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.371 [Pipeline] sh 00:00:06.660 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.660 ++ grep -v 'sudo pgrep' 00:00:06.660 ++ awk '{print $1}' 00:00:06.660 + sudo kill -9 00:00:06.660 + true 00:00:06.674 [Pipeline] cleanWs 00:00:06.683 [WS-CLEANUP] Deleting project workspace... 00:00:06.683 [WS-CLEANUP] Deferred wipeout is used... 
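The "Cleanup processes" step above makes sure no autotest left over from a previous run is still holding the workspace: pgrep lists every process whose command line mentions the workspace path, the pgrep invocation itself is filtered out, and whatever PIDs remain are killed, with a trailing true so an empty result does not fail the build. A minimal bash sketch of that pattern, assuming a single workspace argument; kill_stale_autotest is an illustrative name, not a helper from the repository:

#!/usr/bin/env bash
# Kill any autotest processes still running against this workspace.
# Mirrors the cleanup traced above: pgrep on the workspace path, drop the
# pgrep invocation itself, keep the PIDs, kill -9, and never fail the build.
kill_stale_autotest() {                       # illustrative helper name
    local workspace=$1
    local pids
    pids=$(sudo pgrep -af "${workspace}/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    if [ -n "$pids" ]; then
        sudo kill -9 $pids || true            # same fallback as the "+ true" in the log
    fi
}

kill_stale_autotest /var/jenkins/workspace/nvmf-tcp-phy-autotest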
00:00:06.695 [WS-CLEANUP] done 00:00:06.698 [Pipeline] setCustomBuildProperty 00:00:06.710 [Pipeline] sh 00:00:06.995 + sudo git config --global --replace-all safe.directory '*' 00:00:07.106 [Pipeline] httpRequest 00:00:08.773 [Pipeline] echo 00:00:08.775 Sorcerer 10.211.164.20 is alive 00:00:08.783 [Pipeline] retry 00:00:08.785 [Pipeline] { 00:00:08.798 [Pipeline] httpRequest 00:00:08.802 HttpMethod: GET 00:00:08.803 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.803 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.819 Response Code: HTTP/1.1 200 OK 00:00:08.819 Success: Status code 200 is in the accepted range: 200,404 00:00:08.820 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.335 [Pipeline] } 00:00:15.352 [Pipeline] // retry 00:00:15.360 [Pipeline] sh 00:00:15.648 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.664 [Pipeline] httpRequest 00:00:16.063 [Pipeline] echo 00:00:16.065 Sorcerer 10.211.164.20 is alive 00:00:16.075 [Pipeline] retry 00:00:16.076 [Pipeline] { 00:00:16.090 [Pipeline] httpRequest 00:00:16.095 HttpMethod: GET 00:00:16.096 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:16.097 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:16.119 Response Code: HTTP/1.1 200 OK 00:00:16.119 Success: Status code 200 is in the accepted range: 200,404 00:00:16.119 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:20.862 [Pipeline] } 00:01:20.874 [Pipeline] // retry 00:01:20.881 [Pipeline] sh 00:01:21.172 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:23.727 [Pipeline] sh 00:01:24.017 + git -C spdk log --oneline -n5 00:01:24.017 e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:24.017 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:24.017 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:24.017 66289a6db build: use VERSION file for storing version 00:01:24.017 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:24.035 [Pipeline] withCredentials 00:01:24.046 > git --version # timeout=10 00:01:24.060 > git --version # 'git version 2.39.2' 00:01:24.086 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:24.088 [Pipeline] { 00:01:24.097 [Pipeline] retry 00:01:24.099 [Pipeline] { 00:01:24.114 [Pipeline] sh 00:01:24.653 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:24.927 [Pipeline] } 00:01:24.944 [Pipeline] // retry 00:01:24.949 [Pipeline] } 00:01:24.965 [Pipeline] // withCredentials 00:01:24.973 [Pipeline] httpRequest 00:01:25.383 [Pipeline] echo 00:01:25.384 Sorcerer 10.211.164.20 is alive 00:01:25.393 [Pipeline] retry 00:01:25.395 [Pipeline] { 00:01:25.408 [Pipeline] httpRequest 00:01:25.412 HttpMethod: GET 00:01:25.413 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:25.414 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:25.420 Response Code: HTTP/1.1 200 OK 00:01:25.421 Success: Status code 200 is in the accepted range: 200,404 00:01:25.421 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:07.364 [Pipeline] } 00:02:07.381 [Pipeline] // retry 00:02:07.388 [Pipeline] sh 00:02:07.679 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:09.075 [Pipeline] sh 00:02:09.364 + git -C dpdk log --oneline -n5 00:02:09.364 eeb0605f11 version: 23.11.0 00:02:09.364 238778122a doc: update release notes for 23.11 00:02:09.364 46aa6b3cfc doc: fix description of RSS features 00:02:09.364 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:09.364 7e421ae345 devtools: support skipping forbid rule check 00:02:09.375 [Pipeline] } 00:02:09.389 [Pipeline] // stage 00:02:09.398 [Pipeline] stage 00:02:09.400 [Pipeline] { (Prepare) 00:02:09.419 [Pipeline] writeFile 00:02:09.434 [Pipeline] sh 00:02:09.725 + logger -p user.info -t JENKINS-CI 00:02:09.738 [Pipeline] sh 00:02:10.024 + logger -p user.info -t JENKINS-CI 00:02:10.036 [Pipeline] sh 00:02:10.323 + cat autorun-spdk.conf 00:02:10.323 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.323 SPDK_TEST_NVMF=1 00:02:10.323 SPDK_TEST_NVME_CLI=1 00:02:10.323 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.323 SPDK_TEST_NVMF_NICS=e810 00:02:10.323 SPDK_TEST_VFIOUSER=1 00:02:10.323 SPDK_RUN_UBSAN=1 00:02:10.323 NET_TYPE=phy 00:02:10.323 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:10.323 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:10.330 RUN_NIGHTLY=1 00:02:10.333 [Pipeline] readFile 00:02:10.361 [Pipeline] withEnv 00:02:10.362 [Pipeline] { 00:02:10.368 [Pipeline] sh 00:02:10.650 + set -ex 00:02:10.650 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:10.650 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:10.650 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.650 ++ SPDK_TEST_NVMF=1 00:02:10.651 ++ SPDK_TEST_NVME_CLI=1 00:02:10.651 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.651 ++ SPDK_TEST_NVMF_NICS=e810 00:02:10.651 ++ SPDK_TEST_VFIOUSER=1 00:02:10.651 ++ SPDK_RUN_UBSAN=1 00:02:10.651 ++ NET_TYPE=phy 00:02:10.651 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:10.651 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:10.651 ++ RUN_NIGHTLY=1 00:02:10.651 + case $SPDK_TEST_NVMF_NICS in 00:02:10.651 + DRIVERS=ice 00:02:10.651 + [[ tcp == \r\d\m\a ]] 00:02:10.651 + [[ -n ice ]] 00:02:10.651 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:10.651 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:10.651 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:10.651 rmmod: ERROR: Module i40iw is not currently loaded 00:02:10.651 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:10.651 + true 00:02:10.651 + for D in $DRIVERS 00:02:10.651 + sudo modprobe ice 00:02:10.651 + exit 0 00:02:10.660 [Pipeline] } 00:02:10.675 [Pipeline] // withEnv 00:02:10.680 [Pipeline] } 00:02:10.694 [Pipeline] // stage 00:02:10.702 [Pipeline] catchError 00:02:10.704 [Pipeline] { 00:02:10.718 [Pipeline] timeout 00:02:10.718 Timeout set to expire in 1 hr 0 min 00:02:10.720 [Pipeline] { 00:02:10.734 [Pipeline] stage 00:02:10.736 [Pipeline] { (Tests) 00:02:10.750 [Pipeline] sh 00:02:11.040 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.040 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.040 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.040 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:11.040 + 
DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.040 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:11.040 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:11.040 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:11.040 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:11.040 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:11.040 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:11.040 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.040 + source /etc/os-release 00:02:11.040 ++ NAME='Fedora Linux' 00:02:11.040 ++ VERSION='39 (Cloud Edition)' 00:02:11.040 ++ ID=fedora 00:02:11.040 ++ VERSION_ID=39 00:02:11.040 ++ VERSION_CODENAME= 00:02:11.040 ++ PLATFORM_ID=platform:f39 00:02:11.040 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:11.040 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:11.040 ++ LOGO=fedora-logo-icon 00:02:11.040 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:11.040 ++ HOME_URL=https://fedoraproject.org/ 00:02:11.040 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:11.040 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:11.040 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:11.040 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:11.040 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:11.040 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:11.040 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:11.040 ++ SUPPORT_END=2024-11-12 00:02:11.040 ++ VARIANT='Cloud Edition' 00:02:11.040 ++ VARIANT_ID=cloud 00:02:11.040 + uname -a 00:02:11.040 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:02:11.040 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:13.585 Hugepages 00:02:13.585 node hugesize free / total 00:02:13.585 node0 1048576kB 0 / 0 00:02:13.585 node0 2048kB 0 / 0 00:02:13.585 node1 1048576kB 0 / 0 00:02:13.585 node1 2048kB 0 / 0 00:02:13.585 00:02:13.585 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:13.585 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:13.585 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:13.585 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:13.585 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:13.585 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:13.585 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:13.585 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:13.585 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:13.585 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:13.585 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:13.585 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:13.585 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:13.585 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:13.585 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:13.585 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:13.585 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:13.585 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:13.585 + rm -f /tmp/spdk-ld-path 00:02:13.585 + source autorun-spdk.conf 00:02:13.585 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:13.585 ++ SPDK_TEST_NVMF=1 00:02:13.585 ++ SPDK_TEST_NVME_CLI=1 00:02:13.585 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:13.585 ++ SPDK_TEST_NVMF_NICS=e810 00:02:13.585 ++ SPDK_TEST_VFIOUSER=1 00:02:13.585 ++ SPDK_RUN_UBSAN=1 00:02:13.585 ++ NET_TYPE=phy 00:02:13.585 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:13.585 ++ 
SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:13.585 ++ RUN_NIGHTLY=1 00:02:13.585 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:13.585 + [[ -n '' ]] 00:02:13.585 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.585 + for M in /var/spdk/build-*-manifest.txt 00:02:13.585 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:13.585 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:13.585 + for M in /var/spdk/build-*-manifest.txt 00:02:13.585 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:13.585 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:13.585 + for M in /var/spdk/build-*-manifest.txt 00:02:13.585 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:13.585 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:13.585 ++ uname 00:02:13.585 + [[ Linux == \L\i\n\u\x ]] 00:02:13.585 + sudo dmesg -T 00:02:13.585 + sudo dmesg --clear 00:02:13.585 + dmesg_pid=7011 00:02:13.585 + [[ Fedora Linux == FreeBSD ]] 00:02:13.585 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:13.585 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:13.585 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:13.585 + sudo dmesg -Tw 00:02:13.585 + [[ -x /usr/src/fio-static/fio ]] 00:02:13.585 + export FIO_BIN=/usr/src/fio-static/fio 00:02:13.585 + FIO_BIN=/usr/src/fio-static/fio 00:02:13.585 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:13.585 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:13.585 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:13.585 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:13.585 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:13.585 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:13.585 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:13.585 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:13.585 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:13.846 02:43:28 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:13.846 02:43:28 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:13.846 02:43:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:13.846 02:43:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:13.846 02:43:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:13.846 02:43:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:13.846 02:43:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:13.846 02:43:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:13.846 02:43:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:13.846 02:43:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:13.846 02:43:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:13.846 02:43:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:13.846 02:43:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:13.846 02:43:28 
-- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:13.846 02:43:28 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:13.846 02:43:28 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:13.846 02:43:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:13.846 02:43:28 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:13.846 02:43:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:13.846 02:43:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:13.847 02:43:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:13.847 02:43:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.847 02:43:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.847 02:43:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.847 02:43:28 -- paths/export.sh@5 -- $ export PATH 00:02:13.847 02:43:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.847 02:43:28 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:13.847 02:43:28 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:13.847 02:43:28 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734140608.XXXXXX 00:02:13.847 02:43:28 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734140608.igN8g5 00:02:13.847 02:43:28 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:13.847 02:43:28 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:02:13.847 02:43:28 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:13.847 02:43:28 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' 
--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:13.847 02:43:28 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:13.847 02:43:28 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:13.847 02:43:28 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:13.847 02:43:28 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:13.847 02:43:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.847 02:43:28 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:13.847 02:43:28 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:13.847 02:43:28 -- pm/common@17 -- $ local monitor 00:02:13.847 02:43:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.847 02:43:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.847 02:43:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.847 02:43:28 -- pm/common@21 -- $ date +%s 00:02:13.847 02:43:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.847 02:43:28 -- pm/common@21 -- $ date +%s 00:02:13.847 02:43:28 -- pm/common@25 -- $ sleep 1 00:02:13.847 02:43:28 -- pm/common@21 -- $ date +%s 00:02:13.847 02:43:28 -- pm/common@21 -- $ date +%s 00:02:13.847 02:43:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734140608 00:02:13.847 02:43:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734140608 00:02:13.847 02:43:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734140608 00:02:13.847 02:43:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734140608 00:02:13.847 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734140608_collect-cpu-temp.pm.log 00:02:13.847 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734140608_collect-cpu-load.pm.log 00:02:13.847 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734140608_collect-vmstat.pm.log 00:02:13.847 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734140608_collect-bmc-pm.bmc.pm.log 00:02:14.826 02:43:29 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:14.826 02:43:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:14.826 02:43:29 
-- spdk/autobuild.sh@12 -- $ umask 022 00:02:14.826 02:43:29 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:14.826 02:43:29 -- spdk/autobuild.sh@16 -- $ date -u 00:02:14.826 Sat Dec 14 01:43:29 AM UTC 2024 00:02:14.826 02:43:29 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:14.826 v25.01-rc1-2-ge01cb43b8 00:02:14.826 02:43:29 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:14.826 02:43:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:14.826 02:43:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:14.826 02:43:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:14.826 02:43:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:14.826 02:43:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.826 ************************************ 00:02:14.826 START TEST ubsan 00:02:14.826 ************************************ 00:02:14.826 02:43:29 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:14.826 using ubsan 00:02:14.826 00:02:14.826 real 0m0.000s 00:02:14.826 user 0m0.000s 00:02:14.826 sys 0m0.000s 00:02:14.826 02:43:29 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:14.826 02:43:29 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:14.826 ************************************ 00:02:14.826 END TEST ubsan 00:02:14.827 ************************************ 00:02:14.827 02:43:29 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:14.827 02:43:29 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:14.827 02:43:29 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:14.827 02:43:29 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:14.827 02:43:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:14.827 02:43:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.087 ************************************ 00:02:15.087 START TEST build_native_dpdk 00:02:15.087 ************************************ 00:02:15.087 02:43:29 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:15.087 02:43:29 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:15.087 02:43:29 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:15.087 02:43:29 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:15.087 02:43:29 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:15.087 02:43:29 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:15.087 02:43:29 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:15.087 02:43:29 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:15.087 02:43:29 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:15.087 02:43:29 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:15.087 02:43:29 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:15.087 02:43:29 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:15.087 02:43:29 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@70 -- $ 
external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:15.087 eeb0605f11 version: 23.11.0 00:02:15.087 238778122a doc: update release notes for 23.11 00:02:15.087 46aa6b3cfc doc: fix description of RSS features 00:02:15.087 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:15.087 7e421ae345 devtools: support skipping forbid rule check 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:15.087 02:43:30 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:15.087 02:43:30 build_native_dpdk -- 
scripts/common.sh@337 -- $ IFS=.-: 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:15.087 02:43:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:15.088 02:43:30 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:15.088 patching file config/rte_config.h 00:02:15.088 Hunk #1 succeeded at 60 (offset 1 line). 00:02:15.088 02:43:30 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:15.088 02:43:30 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:15.088 patching file lib/pcapng/rte_pcapng.c 00:02:15.088 02:43:30 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:15.088 02:43:30 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:15.088 02:43:30 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:15.088 02:43:30 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:15.088 02:43:30 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:15.088 02:43:30 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:15.088 02:43:30 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:21.668 The Meson build system 00:02:21.668 Version: 1.5.0 00:02:21.668 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:21.668 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:21.668 Build type: native build 00:02:21.668 Program cat found: YES (/usr/bin/cat) 00:02:21.668 Project name: DPDK 00:02:21.668 Project version: 23.11.0 00:02:21.668 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:21.668 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:21.668 Host machine cpu family: x86_64 00:02:21.668 Host machine cpu: x86_64 00:02:21.668 Message: ## Building in Developer Mode ## 00:02:21.668 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:21.668 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:21.668 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:21.668 Program python3 found: YES (/usr/bin/python3) 00:02:21.668 Program cat found: YES (/usr/bin/cat) 00:02:21.668 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:21.668 Compiler for C supports arguments -march=native: YES 00:02:21.668 Checking for size of "void *" : 8 00:02:21.668 Checking for size of "void *" : 8 (cached) 00:02:21.668 Library m found: YES 00:02:21.668 Library numa found: YES 00:02:21.668 Has header "numaif.h" : YES 00:02:21.668 Library fdt found: NO 00:02:21.668 Library execinfo found: NO 00:02:21.668 Has header "execinfo.h" : YES 00:02:21.668 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:21.668 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:21.668 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:21.668 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:21.668 Run-time dependency openssl found: YES 3.1.1 00:02:21.668 Run-time dependency libpcap found: YES 1.10.4 00:02:21.668 Has header "pcap.h" with dependency libpcap: YES 00:02:21.668 Compiler for C supports arguments -Wcast-qual: YES 00:02:21.668 Compiler for C supports arguments -Wdeprecated: YES 00:02:21.668 Compiler for C supports arguments -Wformat: YES 00:02:21.668 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:21.668 Compiler for C supports arguments -Wformat-security: NO 00:02:21.668 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:21.668 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:21.668 Compiler for C supports arguments -Wnested-externs: YES 00:02:21.668 Compiler for C supports arguments -Wold-style-definition: YES 00:02:21.668 Compiler for C supports arguments -Wpointer-arith: YES 00:02:21.668 Compiler for C supports arguments -Wsign-compare: YES 00:02:21.668 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:21.668 Compiler for C supports arguments -Wundef: YES 00:02:21.668 Compiler for C supports arguments -Wwrite-strings: YES 00:02:21.668 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:21.668 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:21.668 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:21.668 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:21.668 Program objdump found: YES (/usr/bin/objdump) 00:02:21.668 Compiler for C supports arguments -mavx512f: YES 00:02:21.668 Checking if "AVX512 checking" compiles: YES 00:02:21.668 Fetching value of define "__SSE4_2__" : 1 00:02:21.668 Fetching value of define "__AES__" : 1 00:02:21.668 Fetching value of define "__AVX__" : 1 00:02:21.668 Fetching value of define "__AVX2__" : 1 00:02:21.668 Fetching value of define "__AVX512BW__" : 1 00:02:21.668 Fetching value of define "__AVX512CD__" : 1 00:02:21.668 Fetching value of define "__AVX512DQ__" : 1 00:02:21.668 Fetching value of define "__AVX512F__" : 1 00:02:21.668 Fetching value of define "__AVX512VL__" : 1 00:02:21.668 Fetching value of define "__PCLMUL__" : 1 00:02:21.668 Fetching value of define "__RDRND__" : 1 00:02:21.668 Fetching value of define "__RDSEED__" : 1 00:02:21.668 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:21.668 Fetching value of define "__znver1__" : (undefined) 00:02:21.668 Fetching value of define "__znver2__" : (undefined) 00:02:21.668 Fetching value of define "__znver3__" : (undefined) 00:02:21.668 Fetching value of define "__znver4__" : (undefined) 00:02:21.668 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:21.668 Message: lib/log: Defining dependency "log" 00:02:21.668 Message: lib/kvargs: Defining dependency "kvargs" 00:02:21.668 Message: lib/telemetry: Defining dependency 
"telemetry" 00:02:21.668 Checking for function "getentropy" : NO 00:02:21.668 Message: lib/eal: Defining dependency "eal" 00:02:21.668 Message: lib/ring: Defining dependency "ring" 00:02:21.668 Message: lib/rcu: Defining dependency "rcu" 00:02:21.668 Message: lib/mempool: Defining dependency "mempool" 00:02:21.668 Message: lib/mbuf: Defining dependency "mbuf" 00:02:21.668 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:21.668 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.668 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:21.668 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:21.668 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:21.668 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:21.668 Compiler for C supports arguments -mpclmul: YES 00:02:21.668 Compiler for C supports arguments -maes: YES 00:02:21.668 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:21.668 Compiler for C supports arguments -mavx512bw: YES 00:02:21.668 Compiler for C supports arguments -mavx512dq: YES 00:02:21.668 Compiler for C supports arguments -mavx512vl: YES 00:02:21.668 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:21.668 Compiler for C supports arguments -mavx2: YES 00:02:21.668 Compiler for C supports arguments -mavx: YES 00:02:21.668 Message: lib/net: Defining dependency "net" 00:02:21.668 Message: lib/meter: Defining dependency "meter" 00:02:21.668 Message: lib/ethdev: Defining dependency "ethdev" 00:02:21.668 Message: lib/pci: Defining dependency "pci" 00:02:21.668 Message: lib/cmdline: Defining dependency "cmdline" 00:02:21.668 Message: lib/metrics: Defining dependency "metrics" 00:02:21.668 Message: lib/hash: Defining dependency "hash" 00:02:21.668 Message: lib/timer: Defining dependency "timer" 00:02:21.668 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.668 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:21.668 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:21.668 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:21.668 Message: lib/acl: Defining dependency "acl" 00:02:21.668 Message: lib/bbdev: Defining dependency "bbdev" 00:02:21.668 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:21.669 Run-time dependency libelf found: YES 0.191 00:02:21.669 Message: lib/bpf: Defining dependency "bpf" 00:02:21.669 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:21.669 Message: lib/compressdev: Defining dependency "compressdev" 00:02:21.669 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:21.669 Message: lib/distributor: Defining dependency "distributor" 00:02:21.669 Message: lib/dmadev: Defining dependency "dmadev" 00:02:21.669 Message: lib/efd: Defining dependency "efd" 00:02:21.669 Message: lib/eventdev: Defining dependency "eventdev" 00:02:21.669 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:21.669 Message: lib/gpudev: Defining dependency "gpudev" 00:02:21.669 Message: lib/gro: Defining dependency "gro" 00:02:21.669 Message: lib/gso: Defining dependency "gso" 00:02:21.669 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:21.669 Message: lib/jobstats: Defining dependency "jobstats" 00:02:21.669 Message: lib/latencystats: Defining dependency "latencystats" 00:02:21.669 Message: lib/lpm: Defining dependency "lpm" 00:02:21.669 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.669 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:21.669 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:02:21.669 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:21.669 Message: lib/member: Defining dependency "member" 00:02:21.669 Message: lib/pcapng: Defining dependency "pcapng" 00:02:21.669 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:21.669 Message: lib/power: Defining dependency "power" 00:02:21.669 Message: lib/rawdev: Defining dependency "rawdev" 00:02:21.669 Message: lib/regexdev: Defining dependency "regexdev" 00:02:21.669 Message: lib/mldev: Defining dependency "mldev" 00:02:21.669 Message: lib/rib: Defining dependency "rib" 00:02:21.669 Message: lib/reorder: Defining dependency "reorder" 00:02:21.669 Message: lib/sched: Defining dependency "sched" 00:02:21.669 Message: lib/security: Defining dependency "security" 00:02:21.669 Message: lib/stack: Defining dependency "stack" 00:02:21.669 Has header "linux/userfaultfd.h" : YES 00:02:21.669 Has header "linux/vduse.h" : YES 00:02:21.669 Message: lib/vhost: Defining dependency "vhost" 00:02:21.669 Message: lib/ipsec: Defining dependency "ipsec" 00:02:21.669 Message: lib/pdcp: Defining dependency "pdcp" 00:02:21.669 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.669 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:21.669 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:21.669 Message: lib/fib: Defining dependency "fib" 00:02:21.669 Message: lib/port: Defining dependency "port" 00:02:21.669 Message: lib/pdump: Defining dependency "pdump" 00:02:21.669 Message: lib/table: Defining dependency "table" 00:02:21.669 Message: lib/pipeline: Defining dependency "pipeline" 00:02:21.669 Message: lib/graph: Defining dependency "graph" 00:02:21.669 Message: lib/node: Defining dependency "node" 00:02:21.669 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:22.611 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:22.611 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:22.611 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:22.611 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:22.611 Compiler for C supports arguments -Wno-unused-value: YES 00:02:22.611 Compiler for C supports arguments -Wno-format: YES 00:02:22.611 Compiler for C supports arguments -Wno-format-security: YES 00:02:22.611 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:22.611 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:22.611 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:22.611 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:22.611 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:22.611 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:22.611 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:22.611 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:22.611 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:22.611 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:22.611 Has header "sys/epoll.h" : YES 00:02:22.611 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:22.611 Configuring doxy-api-html.conf using configuration 00:02:22.611 Configuring doxy-api-man.conf using configuration 00:02:22.611 Program mandb found: YES (/usr/bin/mandb) 00:02:22.611 Program sphinx-build found: NO 00:02:22.611 Configuring rte_build_config.h using configuration 00:02:22.611 Message: 00:02:22.611 ================= 00:02:22.611 Applications Enabled 
00:02:22.611 ================= 00:02:22.611 00:02:22.611 apps: 00:02:22.611 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:22.611 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:22.611 test-pmd, test-regex, test-sad, test-security-perf, 00:02:22.611 00:02:22.611 Message: 00:02:22.611 ================= 00:02:22.611 Libraries Enabled 00:02:22.611 ================= 00:02:22.611 00:02:22.611 libs: 00:02:22.611 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:22.611 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:22.611 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:22.611 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:22.611 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:22.611 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:22.611 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:22.611 00:02:22.611 00:02:22.611 Message: 00:02:22.611 =============== 00:02:22.611 Drivers Enabled 00:02:22.611 =============== 00:02:22.611 00:02:22.611 common: 00:02:22.611 00:02:22.611 bus: 00:02:22.611 pci, vdev, 00:02:22.611 mempool: 00:02:22.611 ring, 00:02:22.611 dma: 00:02:22.611 00:02:22.611 net: 00:02:22.611 i40e, 00:02:22.611 raw: 00:02:22.611 00:02:22.611 crypto: 00:02:22.611 00:02:22.611 compress: 00:02:22.611 00:02:22.611 regex: 00:02:22.611 00:02:22.611 ml: 00:02:22.611 00:02:22.611 vdpa: 00:02:22.611 00:02:22.611 event: 00:02:22.611 00:02:22.611 baseband: 00:02:22.611 00:02:22.611 gpu: 00:02:22.611 00:02:22.611 00:02:22.611 Message: 00:02:22.611 ================= 00:02:22.611 Content Skipped 00:02:22.611 ================= 00:02:22.611 00:02:22.611 apps: 00:02:22.611 00:02:22.611 libs: 00:02:22.611 00:02:22.611 drivers: 00:02:22.611 common/cpt: not in enabled drivers build config 00:02:22.611 common/dpaax: not in enabled drivers build config 00:02:22.611 common/iavf: not in enabled drivers build config 00:02:22.611 common/idpf: not in enabled drivers build config 00:02:22.611 common/mvep: not in enabled drivers build config 00:02:22.611 common/octeontx: not in enabled drivers build config 00:02:22.611 bus/auxiliary: not in enabled drivers build config 00:02:22.611 bus/cdx: not in enabled drivers build config 00:02:22.611 bus/dpaa: not in enabled drivers build config 00:02:22.611 bus/fslmc: not in enabled drivers build config 00:02:22.611 bus/ifpga: not in enabled drivers build config 00:02:22.611 bus/platform: not in enabled drivers build config 00:02:22.611 bus/vmbus: not in enabled drivers build config 00:02:22.611 common/cnxk: not in enabled drivers build config 00:02:22.611 common/mlx5: not in enabled drivers build config 00:02:22.611 common/nfp: not in enabled drivers build config 00:02:22.612 common/qat: not in enabled drivers build config 00:02:22.612 common/sfc_efx: not in enabled drivers build config 00:02:22.612 mempool/bucket: not in enabled drivers build config 00:02:22.612 mempool/cnxk: not in enabled drivers build config 00:02:22.612 mempool/dpaa: not in enabled drivers build config 00:02:22.612 mempool/dpaa2: not in enabled drivers build config 00:02:22.612 mempool/octeontx: not in enabled drivers build config 00:02:22.612 mempool/stack: not in enabled drivers build config 00:02:22.612 dma/cnxk: not in enabled drivers build config 00:02:22.612 dma/dpaa: not in enabled drivers build config 00:02:22.612 dma/dpaa2: not in enabled 
drivers build config 00:02:22.612 dma/hisilicon: not in enabled drivers build config 00:02:22.612 dma/idxd: not in enabled drivers build config 00:02:22.612 dma/ioat: not in enabled drivers build config 00:02:22.612 dma/skeleton: not in enabled drivers build config 00:02:22.612 net/af_packet: not in enabled drivers build config 00:02:22.612 net/af_xdp: not in enabled drivers build config 00:02:22.612 net/ark: not in enabled drivers build config 00:02:22.612 net/atlantic: not in enabled drivers build config 00:02:22.612 net/avp: not in enabled drivers build config 00:02:22.612 net/axgbe: not in enabled drivers build config 00:02:22.612 net/bnx2x: not in enabled drivers build config 00:02:22.612 net/bnxt: not in enabled drivers build config 00:02:22.612 net/bonding: not in enabled drivers build config 00:02:22.612 net/cnxk: not in enabled drivers build config 00:02:22.612 net/cpfl: not in enabled drivers build config 00:02:22.612 net/cxgbe: not in enabled drivers build config 00:02:22.612 net/dpaa: not in enabled drivers build config 00:02:22.612 net/dpaa2: not in enabled drivers build config 00:02:22.612 net/e1000: not in enabled drivers build config 00:02:22.612 net/ena: not in enabled drivers build config 00:02:22.612 net/enetc: not in enabled drivers build config 00:02:22.612 net/enetfec: not in enabled drivers build config 00:02:22.612 net/enic: not in enabled drivers build config 00:02:22.612 net/failsafe: not in enabled drivers build config 00:02:22.612 net/fm10k: not in enabled drivers build config 00:02:22.612 net/gve: not in enabled drivers build config 00:02:22.612 net/hinic: not in enabled drivers build config 00:02:22.612 net/hns3: not in enabled drivers build config 00:02:22.612 net/iavf: not in enabled drivers build config 00:02:22.612 net/ice: not in enabled drivers build config 00:02:22.612 net/idpf: not in enabled drivers build config 00:02:22.612 net/igc: not in enabled drivers build config 00:02:22.612 net/ionic: not in enabled drivers build config 00:02:22.612 net/ipn3ke: not in enabled drivers build config 00:02:22.612 net/ixgbe: not in enabled drivers build config 00:02:22.612 net/mana: not in enabled drivers build config 00:02:22.612 net/memif: not in enabled drivers build config 00:02:22.612 net/mlx4: not in enabled drivers build config 00:02:22.612 net/mlx5: not in enabled drivers build config 00:02:22.612 net/mvneta: not in enabled drivers build config 00:02:22.612 net/mvpp2: not in enabled drivers build config 00:02:22.612 net/netvsc: not in enabled drivers build config 00:02:22.612 net/nfb: not in enabled drivers build config 00:02:22.612 net/nfp: not in enabled drivers build config 00:02:22.612 net/ngbe: not in enabled drivers build config 00:02:22.612 net/null: not in enabled drivers build config 00:02:22.612 net/octeontx: not in enabled drivers build config 00:02:22.612 net/octeon_ep: not in enabled drivers build config 00:02:22.612 net/pcap: not in enabled drivers build config 00:02:22.612 net/pfe: not in enabled drivers build config 00:02:22.612 net/qede: not in enabled drivers build config 00:02:22.612 net/ring: not in enabled drivers build config 00:02:22.612 net/sfc: not in enabled drivers build config 00:02:22.612 net/softnic: not in enabled drivers build config 00:02:22.612 net/tap: not in enabled drivers build config 00:02:22.612 net/thunderx: not in enabled drivers build config 00:02:22.612 net/txgbe: not in enabled drivers build config 00:02:22.612 net/vdev_netvsc: not in enabled drivers build config 00:02:22.612 net/vhost: not in enabled drivers 
build config 00:02:22.612 net/virtio: not in enabled drivers build config 00:02:22.612 net/vmxnet3: not in enabled drivers build config 00:02:22.612 raw/cnxk_bphy: not in enabled drivers build config 00:02:22.612 raw/cnxk_gpio: not in enabled drivers build config 00:02:22.612 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:22.612 raw/ifpga: not in enabled drivers build config 00:02:22.612 raw/ntb: not in enabled drivers build config 00:02:22.612 raw/skeleton: not in enabled drivers build config 00:02:22.612 crypto/armv8: not in enabled drivers build config 00:02:22.612 crypto/bcmfs: not in enabled drivers build config 00:02:22.612 crypto/caam_jr: not in enabled drivers build config 00:02:22.612 crypto/ccp: not in enabled drivers build config 00:02:22.612 crypto/cnxk: not in enabled drivers build config 00:02:22.612 crypto/dpaa_sec: not in enabled drivers build config 00:02:22.612 crypto/dpaa2_sec: not in enabled drivers build config 00:02:22.612 crypto/ipsec_mb: not in enabled drivers build config 00:02:22.612 crypto/mlx5: not in enabled drivers build config 00:02:22.612 crypto/mvsam: not in enabled drivers build config 00:02:22.612 crypto/nitrox: not in enabled drivers build config 00:02:22.612 crypto/null: not in enabled drivers build config 00:02:22.612 crypto/octeontx: not in enabled drivers build config 00:02:22.612 crypto/openssl: not in enabled drivers build config 00:02:22.612 crypto/scheduler: not in enabled drivers build config 00:02:22.612 crypto/uadk: not in enabled drivers build config 00:02:22.612 crypto/virtio: not in enabled drivers build config 00:02:22.612 compress/isal: not in enabled drivers build config 00:02:22.612 compress/mlx5: not in enabled drivers build config 00:02:22.612 compress/octeontx: not in enabled drivers build config 00:02:22.612 compress/zlib: not in enabled drivers build config 00:02:22.612 regex/mlx5: not in enabled drivers build config 00:02:22.612 regex/cn9k: not in enabled drivers build config 00:02:22.612 ml/cnxk: not in enabled drivers build config 00:02:22.612 vdpa/ifc: not in enabled drivers build config 00:02:22.612 vdpa/mlx5: not in enabled drivers build config 00:02:22.612 vdpa/nfp: not in enabled drivers build config 00:02:22.612 vdpa/sfc: not in enabled drivers build config 00:02:22.612 event/cnxk: not in enabled drivers build config 00:02:22.612 event/dlb2: not in enabled drivers build config 00:02:22.612 event/dpaa: not in enabled drivers build config 00:02:22.612 event/dpaa2: not in enabled drivers build config 00:02:22.612 event/dsw: not in enabled drivers build config 00:02:22.612 event/opdl: not in enabled drivers build config 00:02:22.612 event/skeleton: not in enabled drivers build config 00:02:22.612 event/sw: not in enabled drivers build config 00:02:22.612 event/octeontx: not in enabled drivers build config 00:02:22.612 baseband/acc: not in enabled drivers build config 00:02:22.612 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:22.612 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:22.612 baseband/la12xx: not in enabled drivers build config 00:02:22.612 baseband/null: not in enabled drivers build config 00:02:22.612 baseband/turbo_sw: not in enabled drivers build config 00:02:22.612 gpu/cuda: not in enabled drivers build config 00:02:22.612 00:02:22.612 00:02:22.612 Build targets in project: 217 00:02:22.612 00:02:22.612 DPDK 23.11.0 00:02:22.612 00:02:22.612 User defined options 00:02:22.612 libdir : lib 00:02:22.612 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 
00:02:22.612 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:22.612 c_link_args : 00:02:22.612 enable_docs : false 00:02:22.612 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:22.612 enable_kmods : false 00:02:22.612 machine : native 00:02:22.612 tests : false 00:02:22.612 00:02:22.612 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:22.612 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:22.612 02:43:37 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:02:22.612 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:22.878 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:22.878 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:22.878 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:22.878 [4/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:22.878 [5/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:22.878 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:22.878 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:22.878 [8/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:22.878 [9/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:22.878 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:22.878 [11/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:22.878 [12/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:22.878 [13/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:22.878 [14/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:22.878 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:22.878 [16/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:22.878 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:22.878 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:22.878 [19/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:22.878 [20/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:22.878 [21/707] Linking static target lib/librte_kvargs.a 00:02:23.143 [22/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:23.143 [23/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:23.143 [24/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:23.143 [25/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:23.143 [26/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:23.143 [27/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:23.143 [28/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:23.143 [29/707] Linking static target lib/librte_pci.a 00:02:23.143 [30/707] Linking static target lib/librte_log.a 00:02:23.143 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:23.143 [32/707] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:23.143 [33/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:23.143 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:23.143 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:23.425 [36/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.425 [37/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:23.425 [38/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:23.425 [39/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:23.425 [40/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:23.425 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:23.425 [42/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:23.425 [43/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:23.425 [44/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:23.425 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:23.425 [46/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:23.425 [47/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:23.425 [48/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:23.425 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:23.425 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:23.425 [51/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:23.425 [52/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:23.425 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:23.425 [54/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:23.700 [55/707] Linking static target lib/librte_ring.a 00:02:23.700 [56/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:23.700 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:23.700 [58/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:23.700 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:23.700 [60/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:23.700 [61/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:23.700 [62/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.700 [63/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:23.700 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:23.700 [65/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:23.700 [66/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:23.700 [67/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:23.700 [68/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:23.700 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:23.700 [70/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:23.700 [71/707] Compiling C object 
lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:23.700 [72/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:23.700 [73/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:23.700 [74/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:23.700 [75/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:23.700 [76/707] Linking static target lib/librte_meter.a 00:02:23.700 [77/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:23.700 [78/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:23.700 [79/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:23.700 [80/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:23.700 [81/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:23.700 [82/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:23.700 [83/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:23.700 [84/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:23.700 [85/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:23.700 [86/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:23.700 [87/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:23.700 [88/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:23.700 [89/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:23.700 [90/707] Linking static target lib/librte_metrics.a 00:02:23.700 [91/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:23.700 [92/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:23.700 [93/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:23.700 [94/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:23.700 [95/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:23.965 [96/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:23.965 [97/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:23.965 [98/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:23.965 [99/707] Linking static target lib/librte_cmdline.a 00:02:23.965 [100/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:23.965 [101/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:23.965 [102/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:23.965 [103/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:23.965 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:23.965 [105/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:23.965 [106/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:23.965 [107/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:23.965 [108/707] Linking static target lib/librte_net.a 00:02:23.965 [109/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.965 [110/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.965 [111/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 
00:02:23.965 [112/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:23.965 [113/707] Linking static target lib/librte_cfgfile.a 00:02:24.238 [114/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:24.238 [115/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:24.238 [116/707] Linking target lib/librte_log.so.24.0 00:02:24.238 [117/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:24.238 [118/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:24.238 [119/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:24.238 [120/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.238 [121/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:24.238 [122/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:24.238 [123/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:24.238 [124/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:24.238 [125/707] Linking static target lib/librte_bitratestats.a 00:02:24.238 [126/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:24.238 [127/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:24.238 [128/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:24.238 [129/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:24.238 [130/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:24.238 [131/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:24.238 [132/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:24.238 [133/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:24.238 [134/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:24.238 [135/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.238 [136/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:24.238 [137/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:24.238 [138/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.238 [139/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:24.517 [140/707] Linking static target lib/librte_timer.a 00:02:24.517 [141/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:24.517 [142/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.517 [143/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:24.517 [144/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.517 [145/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:24.517 [146/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:24.517 [147/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:24.517 [148/707] Linking static target lib/librte_mempool.a 00:02:24.517 [149/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:24.517 [150/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:24.517 [151/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:24.517 [152/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 
00:02:24.517 [153/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:24.517 [154/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.517 [155/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:24.517 [156/707] Linking static target lib/librte_compressdev.a 00:02:24.517 [157/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.517 [158/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:24.517 [159/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:24.517 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:24.517 [161/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:24.517 [162/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:24.517 [163/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.517 [164/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:24.517 [165/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:24.517 [166/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:24.517 [167/707] Linking static target lib/librte_rcu.a 00:02:24.797 [168/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:24.797 [169/707] Linking static target lib/librte_telemetry.a 00:02:24.797 [170/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:24.797 [171/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.797 [172/707] Linking static target lib/librte_bbdev.a 00:02:24.797 [173/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:24.797 [174/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:24.797 [175/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.797 [176/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:24.797 [177/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:24.797 [178/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:24.797 [179/707] Linking static target lib/librte_dispatcher.a 00:02:24.797 [180/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:24.797 [181/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:24.797 [182/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:24.797 [183/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:24.797 [184/707] Linking static target lib/librte_jobstats.a 00:02:24.798 [185/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:24.798 [186/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:24.798 [187/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:24.798 [188/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:24.798 [189/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:24.798 [190/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:24.798 [191/707] Linking static target lib/librte_gpudev.a 00:02:24.798 [192/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:25.074 [193/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:25.074 [194/707] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:25.074 [195/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:25.074 [196/707] Linking static target lib/librte_dmadev.a 00:02:25.074 [197/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:25.074 [198/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:25.074 [199/707] Linking static target lib/librte_latencystats.a 00:02:25.074 [200/707] Linking static target lib/librte_mbuf.a 00:02:25.074 [201/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:25.074 [202/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:25.074 [203/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.074 [204/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:25.074 [205/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:25.074 [206/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:25.074 [207/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:25.074 [208/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:25.074 [209/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:25.074 [210/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:25.074 [211/707] Linking static target lib/librte_gro.a 00:02:25.074 [212/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:25.074 [213/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:25.075 [214/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:25.075 [215/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:25.075 [216/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:25.075 [217/707] Linking static target lib/librte_distributor.a 00:02:25.075 [218/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:25.075 [219/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:25.075 [220/707] Linking static target lib/librte_ip_frag.a 00:02:25.343 [221/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:25.343 [222/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:25.343 [223/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:25.343 [224/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:25.343 [225/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.343 [226/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:25.343 [227/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:25.343 [228/707] Linking static target lib/librte_gso.a 00:02:25.343 [229/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:25.343 [230/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:25.343 [231/707] Linking static target lib/librte_regexdev.a 00:02:25.343 [232/707] Linking static target lib/librte_eal.a 00:02:25.343 [233/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:25.343 [234/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:25.343 [235/707] Linking static target lib/librte_stack.a 00:02:25.343 [236/707] Compiling C 
object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:25.343 [237/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:25.343 [238/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:25.343 [239/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:25.343 [240/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:25.343 [241/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:25.343 [242/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:25.343 [243/707] Linking static target lib/librte_rawdev.a 00:02:25.343 [244/707] Linking static target lib/librte_mldev.a 00:02:25.343 [245/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:25.343 [246/707] Linking static target lib/librte_pcapng.a 00:02:25.343 [247/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:25.615 [248/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.615 [249/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:25.615 [250/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.615 [251/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.615 [252/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.615 [253/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.615 [254/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:25.615 [255/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:25.615 [256/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:25.615 [257/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.615 [258/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:25.615 [259/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:25.615 [260/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:25.615 [261/707] Linking static target lib/librte_bpf.a 00:02:25.615 [262/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.615 [263/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:25.615 [264/707] Linking static target lib/librte_security.a 00:02:25.615 [265/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.615 [266/707] Linking static target lib/librte_power.a 00:02:25.615 [267/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:25.615 [268/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.615 [269/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:25.615 [270/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.615 [271/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:25.615 [272/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.878 [273/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.878 [274/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:25.878 [275/707] Linking static target 
lib/librte_reorder.a 00:02:25.878 [276/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.878 [277/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:25.878 [278/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:25.878 [279/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:25.878 [280/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:25.878 [281/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.878 [282/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.878 [283/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:25.878 [284/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:25.878 [285/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:25.878 [286/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:25.878 [287/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.878 [288/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:25.878 [289/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:25.878 [290/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:25.878 [291/707] Linking static target lib/librte_rib.a 00:02:26.145 [292/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:26.145 [293/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.145 [294/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:26.145 [295/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:26.145 [296/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:26.145 [297/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:26.145 [298/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.145 [299/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:26.145 [300/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:26.145 [301/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:26.145 [302/707] Linking static target lib/librte_lpm.a 00:02:26.145 [303/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:26.412 [304/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:26.412 [305/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.412 [306/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:26.412 [307/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:26.412 [308/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:26.412 [309/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:26.412 [310/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:26.412 [311/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:26.412 [312/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:26.412 [313/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:26.412 [314/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:26.412 [315/707] 
Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:26.412 [316/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.412 [317/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:26.412 [318/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.412 [319/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:26.412 [320/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:26.412 [321/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:26.412 [322/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:26.412 [323/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:26.412 [324/707] Linking static target lib/librte_efd.a 00:02:26.684 [325/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:26.684 [326/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:26.684 [327/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.684 [328/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:26.684 [329/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:26.684 [330/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:26.684 [331/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.684 [332/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:26.684 [333/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:26.684 [334/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:26.684 [335/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:26.684 [336/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:26.684 [337/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:26.684 [338/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:26.684 [339/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:26.684 [340/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:26.684 [341/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:26.684 [342/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:26.684 [343/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:26.684 [344/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:26.684 [345/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:26.684 [346/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.684 [347/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:26.951 [348/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.951 [349/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:26.951 [350/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:26.951 [351/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:26.951 [352/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:26.951 [353/707] Linking static target lib/librte_fib.a 00:02:26.951 [354/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.951 [355/707] Compiling C 
object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:26.951 [356/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:26.951 [357/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:26.951 [358/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:26.951 [359/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:26.951 [360/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:26.951 [361/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.951 [362/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:27.217 [363/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:27.217 [364/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:27.217 [365/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:27.217 [366/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:27.217 [367/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:27.217 [368/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:27.217 [369/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:27.217 [370/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:27.217 [371/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:27.217 [372/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:27.488 [373/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:27.488 [374/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:27.488 [375/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:27.488 [376/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:27.488 [377/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:27.488 [378/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:27.488 [379/707] Linking static target lib/librte_graph.a 00:02:27.488 [380/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:27.488 [381/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:27.488 [382/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:27.488 [383/707] Linking static target lib/librte_pdump.a 00:02:27.488 [384/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:27.488 [385/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:27.488 [386/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:27.488 [387/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:27.488 [388/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:27.488 [389/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.488 [390/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:27.488 [391/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:27.488 [392/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:27.757 [393/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:27.757 [394/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:27.757 [395/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:27.757 [396/707] 
Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:27.757 [397/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:27.757 [398/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:27.757 [399/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:27.757 [400/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:27.757 [401/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:27.757 [402/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:27.757 [403/707] Linking target lib/librte_kvargs.so.24.0 00:02:27.757 [404/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:27.757 [405/707] Linking static target lib/librte_sched.a 00:02:27.757 [406/707] Linking target lib/librte_telemetry.so.24.0 00:02:27.757 [407/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:27.757 [408/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:27.757 [409/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:27.757 [410/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:27.757 [411/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:27.757 [412/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:27.757 [413/707] Linking static target lib/acl/libavx2_tmp.a 00:02:27.757 [414/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:27.757 [415/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:27.757 [416/707] Linking static target lib/librte_member.a 00:02:27.757 [417/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:28.028 [418/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:28.028 [419/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:28.028 [420/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:28.028 [421/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:28.028 [422/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:28.028 [423/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:28.028 [424/707] Linking static target lib/librte_cryptodev.a 00:02:28.028 [425/707] Linking static target lib/librte_ipsec.a 00:02:28.028 [426/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:28.028 [427/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:28.028 [428/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.028 [429/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.028 [430/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:28.028 [431/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:28.028 [432/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:28.028 [433/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:28.028 [434/707] Linking static target lib/librte_table.a 00:02:28.028 [435/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:28.028 [436/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.028 [437/707] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:28.028 [438/707] Linking static target drivers/librte_bus_pci.a 00:02:28.028 [439/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.028 [440/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:28.028 [441/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:28.028 [442/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.028 [443/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:28.028 [444/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.028 [445/707] Linking static target drivers/librte_bus_vdev.a 00:02:28.028 [446/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:28.028 [447/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:28.028 [448/707] Linking static target lib/librte_pdcp.a 00:02:28.028 [449/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:28.292 [450/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:28.292 [451/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:28.292 [452/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:28.292 [453/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:28.292 [454/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:28.292 [455/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:28.292 [456/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:28.292 [457/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:28.292 [458/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:28.292 [459/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:28.564 [460/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:28.564 [461/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:28.564 [462/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.564 [463/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:28.564 [464/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:28.564 [465/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:28.564 [466/707] Linking static target lib/librte_port.a 00:02:28.564 [467/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:28.564 [468/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:28.564 [469/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:28.564 [470/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:28.564 [471/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:28.564 [472/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.564 [473/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:28.564 
[474/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:28.564 [475/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:28.564 [476/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:28.564 [477/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:28.564 [478/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:28.564 [479/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:28.830 [480/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:28.830 [481/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:28.830 [482/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:28.830 [483/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:28.830 [484/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:28.830 [485/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:28.830 [486/707] Linking static target drivers/librte_mempool_ring.a 00:02:28.830 [487/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.830 [488/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:28.830 [489/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:28.830 [490/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:28.830 [491/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.830 [492/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.830 [493/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:28.830 [494/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:28.830 [495/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:28.830 [496/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:28.830 [497/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:28.830 [498/707] Linking static target lib/librte_acl.a 00:02:28.830 [499/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.830 [500/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:28.830 [501/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:28.830 [502/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:28.830 [503/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:28.830 [504/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:28.830 [505/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:29.092 [506/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:29.092 [507/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:29.092 [508/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:29.092 [509/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:29.092 [510/707] Generating 
drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.092 [511/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:29.092 [512/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:29.092 [513/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:29.092 [514/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:29.092 [515/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:29.092 [516/707] Linking static target lib/librte_node.a 00:02:29.092 [517/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:29.092 [518/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:29.092 [519/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:29.092 [520/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:29.092 [521/707] Linking static target lib/librte_eventdev.a 00:02:29.092 [522/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:29.092 [523/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:29.353 [524/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.353 [525/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.353 [526/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.353 [527/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:29.353 [528/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:29.353 [529/707] Linking static target lib/librte_hash.a 00:02:29.353 [530/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:29.353 [531/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:29.353 [532/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:29.353 [533/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:29.353 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:29.353 [535/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:29.353 [536/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:29.353 [537/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:29.353 [538/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:29.353 [539/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:29.353 [540/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:29.353 [541/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:29.353 [542/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:29.353 [543/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:29.353 [544/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:29.353 [545/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:29.612 [546/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:29.612 [547/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:29.612 [548/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:29.612 [549/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:29.612 [550/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:29.612 [551/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:29.612 [552/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:29.612 [553/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:29.612 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:29.612 [555/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:29.612 [556/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:29.612 [557/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:29.612 [558/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:29.612 [559/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:29.612 [560/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:29.871 [561/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:29.871 [562/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.871 [563/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:29.871 [564/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:29.871 [565/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:29.871 [566/707] Linking static target lib/librte_ethdev.a 00:02:29.871 [567/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:29.871 [568/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:29.871 [569/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:29.871 [570/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.129 [571/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:30.129 [572/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:30.129 [573/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:30.389 [574/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:30.648 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:30.648 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:30.907 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:30.907 [578/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:31.167 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:31.425 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:31.425 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:31.425 [582/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:31.995 [583/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:31.995 [584/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:31.995 [585/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:31.995 [586/707] Linking static target drivers/librte_net_i40e.a 00:02:31.995 
[587/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:32.255 [588/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.824 [589/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:32.824 [590/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.082 [591/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:34.462 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:36.370 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.370 [594/707] Linking target lib/librte_eal.so.24.0 00:02:36.370 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:36.370 [596/707] Linking target lib/librte_timer.so.24.0 00:02:36.370 [597/707] Linking target lib/librte_pci.so.24.0 00:02:36.371 [598/707] Linking target lib/librte_meter.so.24.0 00:02:36.371 [599/707] Linking target lib/librte_ring.so.24.0 00:02:36.371 [600/707] Linking target lib/librte_cfgfile.so.24.0 00:02:36.371 [601/707] Linking target lib/librte_jobstats.so.24.0 00:02:36.371 [602/707] Linking target lib/librte_stack.so.24.0 00:02:36.371 [603/707] Linking target lib/librte_rawdev.so.24.0 00:02:36.371 [604/707] Linking target lib/librte_dmadev.so.24.0 00:02:36.371 [605/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:36.371 [606/707] Linking target lib/librte_acl.so.24.0 00:02:36.371 [607/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:36.371 [608/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:36.371 [609/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:36.371 [610/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:36.371 [611/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:36.371 [612/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:36.371 [613/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:36.371 [614/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:36.371 [615/707] Linking target lib/librte_rcu.so.24.0 00:02:36.371 [616/707] Linking target lib/librte_mempool.so.24.0 00:02:36.629 [617/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:36.629 [618/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:36.629 [619/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:36.629 [620/707] Linking target lib/librte_rib.so.24.0 00:02:36.629 [621/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:36.629 [622/707] Linking target lib/librte_mbuf.so.24.0 00:02:36.629 [623/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:36.888 [624/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:36.888 [625/707] Linking target lib/librte_fib.so.24.0 00:02:36.888 [626/707] Linking target lib/librte_compressdev.so.24.0 00:02:36.888 [627/707] Linking target lib/librte_bbdev.so.24.0 00:02:36.888 [628/707] Linking target lib/librte_gpudev.so.24.0 00:02:36.888 [629/707] Linking target lib/librte_distributor.so.24.0 00:02:36.888 [630/707] Linking target 
lib/librte_regexdev.so.24.0 00:02:36.889 [631/707] Linking target lib/librte_reorder.so.24.0 00:02:36.889 [632/707] Linking target lib/librte_net.so.24.0 00:02:36.889 [633/707] Linking target lib/librte_mldev.so.24.0 00:02:36.889 [634/707] Linking target lib/librte_sched.so.24.0 00:02:36.889 [635/707] Linking target lib/librte_cryptodev.so.24.0 00:02:36.889 [636/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:36.889 [637/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:36.889 [638/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:36.889 [639/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:36.889 [640/707] Linking target lib/librte_cmdline.so.24.0 00:02:36.889 [641/707] Linking target lib/librte_security.so.24.0 00:02:36.889 [642/707] Linking target lib/librte_hash.so.24.0 00:02:37.149 [643/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:37.149 [644/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:37.149 [645/707] Linking target lib/librte_efd.so.24.0 00:02:37.149 [646/707] Linking target lib/librte_lpm.so.24.0 00:02:37.149 [647/707] Linking target lib/librte_pdcp.so.24.0 00:02:37.149 [648/707] Linking target lib/librte_member.so.24.0 00:02:37.149 [649/707] Linking target lib/librte_ipsec.so.24.0 00:02:37.149 [650/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.408 [651/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:37.408 [652/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:37.408 [653/707] Linking target lib/librte_ethdev.so.24.0 00:02:37.408 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:37.408 [655/707] Linking target lib/librte_pcapng.so.24.0 00:02:37.408 [656/707] Linking target lib/librte_metrics.so.24.0 00:02:37.408 [657/707] Linking target lib/librte_gso.so.24.0 00:02:37.408 [658/707] Linking target lib/librte_gro.so.24.0 00:02:37.408 [659/707] Linking target lib/librte_power.so.24.0 00:02:37.408 [660/707] Linking target lib/librte_ip_frag.so.24.0 00:02:37.408 [661/707] Linking target lib/librte_bpf.so.24.0 00:02:37.408 [662/707] Linking target lib/librte_eventdev.so.24.0 00:02:37.668 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:37.668 [664/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:37.668 [665/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:37.668 [666/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:37.668 [667/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:37.668 [668/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:37.668 [669/707] Linking target lib/librte_graph.so.24.0 00:02:37.668 [670/707] Linking target lib/librte_bitratestats.so.24.0 00:02:37.668 [671/707] Linking target lib/librte_latencystats.so.24.0 00:02:37.668 [672/707] Linking target lib/librte_dispatcher.so.24.0 00:02:37.668 [673/707] Linking target lib/librte_pdump.so.24.0 00:02:37.668 [674/707] Linking target lib/librte_port.so.24.0 00:02:37.927 [675/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 
00:02:37.927 [676/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:37.927 [677/707] Linking target lib/librte_node.so.24.0 00:02:37.927 [678/707] Linking target lib/librte_table.so.24.0 00:02:37.927 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:39.832 [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:39.832 [681/707] Linking static target lib/librte_pipeline.a 00:02:41.212 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:41.212 [683/707] Linking static target lib/librte_vhost.a 00:02:41.472 [684/707] Linking target app/dpdk-test-fib 00:02:41.731 [685/707] Linking target app/dpdk-test-sad 00:02:41.731 [686/707] Linking target app/dpdk-test-gpudev 00:02:41.731 [687/707] Linking target app/dpdk-proc-info 00:02:41.731 [688/707] Linking target app/dpdk-test-pipeline 00:02:41.731 [689/707] Linking target app/dpdk-dumpcap 00:02:41.731 [690/707] Linking target app/dpdk-test-security-perf 00:02:41.731 [691/707] Linking target app/dpdk-test-bbdev 00:02:41.731 [692/707] Linking target app/dpdk-graph 00:02:41.731 [693/707] Linking target app/dpdk-test-dma-perf 00:02:41.731 [694/707] Linking target app/dpdk-test-cmdline 00:02:41.731 [695/707] Linking target app/dpdk-pdump 00:02:41.731 [696/707] Linking target app/dpdk-test-mldev 00:02:41.731 [697/707] Linking target app/dpdk-test-flow-perf 00:02:41.731 [698/707] Linking target app/dpdk-test-regex 00:02:41.731 [699/707] Linking target app/dpdk-test-acl 00:02:41.731 [700/707] Linking target app/dpdk-test-compress-perf 00:02:41.731 [701/707] Linking target app/dpdk-test-eventdev 00:02:41.731 [702/707] Linking target app/dpdk-test-crypto-perf 00:02:41.731 [703/707] Linking target app/dpdk-testpmd 00:02:43.115 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.115 [705/707] Linking target lib/librte_vhost.so.24.0 00:02:45.026 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.026 [707/707] Linking target lib/librte_pipeline.so.24.0 00:02:45.026 02:43:59 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:45.026 02:43:59 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:45.026 02:43:59 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:45.026 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:45.026 [0/1] Installing files. 
00:02:45.290 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.290 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.291 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:45.291 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.291 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.292 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.292 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:45.293 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.294 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.294 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.294 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.295 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.296 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:45.296 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:45.296 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing 
lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing 
lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.296 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:45.559 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:45.559 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:45.559 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.559 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:45.559 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.559 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.559 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.559 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.559 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.559 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.559 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.559 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.559 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.560 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.560 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.560 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.560 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.560 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.560 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.560 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.560 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.560 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.560 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.560 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.560 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
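The installation entries above and below copy DPDK's public headers out of the per-library source trees (lib/eal, lib/ring, lib/mbuf, lib/net, lib/ethdev, lib/cmdline, and so on) into the single flat staging directory dpdk/build/include. A minimal shell sketch for spot-checking that staging, assuming the workspace path shown in the log; the DPDK_BUILD variable and the particular headers picked are illustrative only:

    # Staging prefix used throughout this install step (path taken from the log).
    DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    # Every public header lands flat in one include directory, regardless of
    # which library (ring, mbuf, ethdev, ...) it came from.
    for hdr in rte_ring.h rte_mbuf.h rte_ethdev.h rte_version.h; do
        test -f "$DPDK_BUILD/include/$hdr" && echo "staged: $hdr"
    done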
00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:45.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:45.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:45.564 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:45.564 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:45.564 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:45.564 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:45.564 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:45.564 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:45.564 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:45.564 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:45.564 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:45.564 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:45.564 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:45.564 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:45.564 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:45.564 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:45.564 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:45.564 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:45.564 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:45.564 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:45.564 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:45.564 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:45.564 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:45.564 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:45.564 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:45.564 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:45.564 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:45.564 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:45.564 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:45.564 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:45.564 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:45.564 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:45.564 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:45.564 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:45.564 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:45.564 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:45.564 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:45.564 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:45.564 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:45.564 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:45.564 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:45.564 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:45.564 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:45.564 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:45.564 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:45.564 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:45.564 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:45.564 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:45.564 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:45.564 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:45.564 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:45.564 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:45.564 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:45.564 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:45.564 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:45.564 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:45.564 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:45.564 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:45.564 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:45.564 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:45.564 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:45.564 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:45.564 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:45.564 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:45.564 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:45.564 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:45.564 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:45.564 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:45.564 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:45.564 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:45.564 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:45.564 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:45.564 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:45.564 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:45.564 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:45.564 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:45.564 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:45.564 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:45.564 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:45.564 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:45.565 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:45.565 Installing symlink pointing to librte_regexdev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:45.565 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:45.565 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:45.565 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:45.565 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:45.565 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:45.565 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:45.565 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:45.565 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:45.565 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:45.565 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:45.565 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:45.565 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:45.565 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:45.565 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:45.565 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:45.565 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:45.565 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:45.565 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:45.565 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:45.565 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:45.565 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:45.565 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:45.565 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:45.565 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:45.565 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:45.565 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:45.565 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:45.565 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:45.565 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:45.565 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:45.565 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:45.565 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:45.565 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:45.565 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:45.565 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:45.565 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:45.565 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:45.565 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:45.565 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:45.565 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:45.565 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:45.565 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:45.565 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:45.565 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:45.565 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:45.565 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:45.565 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:45.565 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:45.565 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:45.565 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:45.565 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:45.565 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:45.565 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:45.824 02:44:00 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:45.824 02:44:00 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:45.824 00:02:45.824 real 0m30.723s 00:02:45.824 user 9m35.395s 00:02:45.824 sys 2m21.525s 00:02:45.824 02:44:00 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:45.824 02:44:00 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:45.824 ************************************ 00:02:45.824 END TEST build_native_dpdk 00:02:45.824 ************************************ 00:02:45.824 02:44:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:45.824 02:44:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:45.824 02:44:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:45.824 02:44:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:45.824 02:44:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:45.824 02:44:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:45.824 02:44:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:45.825 02:44:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:45.825 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:46.083 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.083 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.083 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:46.341 Using 'verbs' RDMA provider 00:02:59.501 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:11.720 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:11.980 Creating mk/config.mk...done. 00:03:11.980 Creating mk/cc.flags.mk...done. 00:03:11.980 Type 'make' to build. 
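The "Installing symlink pointing to ..." entries above set up DPDK's usual shared-object version chain in the staging lib directory: the real file carries the full ABI version (for example librte_ethdev.so.24.0), a soname link drops the minor number (librte_ethdev.so.24), and an unversioned link is what -lrte_ethdev resolves at link time. Bus and PMD driver libraries are additionally mirrored under dpdk/pmds-24.0, as the './librte_bus_pci.so' -> 'dpdk/pmds-24.0/...' lines show. A minimal sketch of that chain for one library; librte_example is a made-up name used only for illustration:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib   # staging lib dir from the log
    # real object -> soname link -> unversioned development link
    ln -sf librte_example.so.24.0 librte_example.so.24
    ln -sf librte_example.so.24   librte_example.so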
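The configure invocation above points SPDK at this staging area with --with-dpdk=.../dpdk/build, and the "Using .../dpdk/build/lib/pkgconfig for additional libs" line shows it resolving the libdpdk.pc installed a few entries earlier. A minimal sketch of how any external build could consume the same staged DPDK through pkg-config, using the paths from the log; the variable name is only for readability:

    DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    export PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig:$PKG_CONFIG_PATH"
    pkg-config --cflags libdpdk          # compile flags for the staged headers
    pkg-config --libs   libdpdk          # link flags for the staged librte_* libraries
    # The staged lib dir is not a default loader path, so export it for run time.
    export LD_LIBRARY_PATH="$DPDK_BUILD/lib:$LD_LIBRARY_PATH"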
00:03:11.980 02:44:27 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:03:11.980 02:44:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:11.980 02:44:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:11.980 02:44:27 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.980 ************************************ 00:03:11.980 START TEST make 00:03:11.980 ************************************ 00:03:11.980 02:44:27 make -- common/autotest_common.sh@1129 -- $ make -j96 00:03:13.899 The Meson build system 00:03:13.899 Version: 1.5.0 00:03:13.899 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:13.899 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:13.899 Build type: native build 00:03:13.899 Project name: libvfio-user 00:03:13.899 Project version: 0.0.1 00:03:13.899 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:13.899 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:13.899 Host machine cpu family: x86_64 00:03:13.899 Host machine cpu: x86_64 00:03:13.899 Run-time dependency threads found: YES 00:03:13.899 Library dl found: YES 00:03:13.899 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:13.899 Run-time dependency json-c found: YES 0.17 00:03:13.899 Run-time dependency cmocka found: YES 1.1.7 00:03:13.899 Program pytest-3 found: NO 00:03:13.899 Program flake8 found: NO 00:03:13.899 Program misspell-fixer found: NO 00:03:13.899 Program restructuredtext-lint found: NO 00:03:13.899 Program valgrind found: YES (/usr/bin/valgrind) 00:03:13.899 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:13.899 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:13.899 Compiler for C supports arguments -Wwrite-strings: YES 00:03:13.899 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:13.899 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:13.899 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:13.899 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
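The meson output above configures libvfio-user out of tree with a debug build type, shared libraries by default, and /usr/local/lib as libdir, then hands off to ninja. A minimal standalone sketch of an equivalent configuration, reusing the source and build directories from the log and meson's generic built-in options; not the exact command the test script runs:

    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    meson setup "$BUILD" "$SRC" \
        --buildtype=debug \
        --default-library=shared \
        --libdir=/usr/local/lib
    ninja -C "$BUILD"     # builds the 8 targets (library, samples, unit tests) listed below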
00:03:13.899 Build targets in project: 8 00:03:13.899 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:13.899 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:13.899 00:03:13.899 libvfio-user 0.0.1 00:03:13.899 00:03:13.899 User defined options 00:03:13.899 buildtype : debug 00:03:13.899 default_library: shared 00:03:13.899 libdir : /usr/local/lib 00:03:13.899 00:03:13.899 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:14.839 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:14.839 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:14.839 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:14.839 [3/37] Compiling C object samples/null.p/null.c.o 00:03:14.839 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:14.839 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:14.839 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:14.839 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:14.839 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:14.839 [9/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:14.839 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:14.839 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:14.839 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:14.839 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:14.839 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:14.839 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:14.839 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:14.839 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:14.839 [18/37] Compiling C object samples/server.p/server.c.o 00:03:14.839 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:14.839 [20/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:14.839 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:14.839 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:14.839 [23/37] Compiling C object samples/client.p/client.c.o 00:03:14.839 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:14.839 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:14.839 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:14.839 [27/37] Linking target samples/client 00:03:14.839 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:15.099 [29/37] Linking target test/unit_tests 00:03:15.099 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:15.099 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:03:15.099 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:15.358 [33/37] Linking target samples/null 00:03:15.358 [34/37] Linking target samples/shadow_ioeventfd_server 00:03:15.358 [35/37] Linking target samples/lspci 00:03:15.358 [36/37] Linking target samples/server 00:03:15.358 [37/37] Linking target samples/gpio-pci-idio-16 00:03:15.358 INFO: autodetecting backend as ninja 00:03:15.358 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
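libvfio-user is configured here as its own Meson project (buildtype debug, default_library shared, libdir /usr/local/lib) and compiled with Ninja; the DESTDIR'd "meson install" entry that follows stages it into SPDK's build tree. A hedged sketch of that sequence, with SPDK_DIR standing in for the spdk checkout path from the log:

# Sketch only: the meson/ninja flow for the libvfio-user submodule.
SPDK_DIR=/path/to/spdk                                    # placeholder
BUILD_DIR="$SPDK_DIR/build/libvfio-user/build-debug"
meson setup "$BUILD_DIR" "$SPDK_DIR/libvfio-user" \
  --buildtype=debug --default-library=shared --libdir=/usr/local/lib
ninja -C "$BUILD_DIR"
DESTDIR="$SPDK_DIR/build/libvfio-user" meson install --quiet -C "$BUILD_DIR"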
00:03:15.358 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:15.617 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:15.617 ninja: no work to do. 00:03:42.224 CC lib/log/log.o 00:03:42.224 CC lib/log/log_flags.o 00:03:42.224 CC lib/log/log_deprecated.o 00:03:42.224 CC lib/ut_mock/mock.o 00:03:42.224 CC lib/ut/ut.o 00:03:42.224 LIB libspdk_ut_mock.a 00:03:42.224 LIB libspdk_ut.a 00:03:42.224 LIB libspdk_log.a 00:03:42.224 SO libspdk_ut_mock.so.6.0 00:03:42.224 SO libspdk_ut.so.2.0 00:03:42.224 SO libspdk_log.so.7.1 00:03:42.224 SYMLINK libspdk_ut_mock.so 00:03:42.224 SYMLINK libspdk_ut.so 00:03:42.224 SYMLINK libspdk_log.so 00:03:42.792 CC lib/dma/dma.o 00:03:42.792 CC lib/ioat/ioat.o 00:03:42.792 CXX lib/trace_parser/trace.o 00:03:42.792 CC lib/util/base64.o 00:03:42.792 CC lib/util/bit_array.o 00:03:42.792 CC lib/util/cpuset.o 00:03:42.792 CC lib/util/crc16.o 00:03:42.792 CC lib/util/crc32.o 00:03:42.792 CC lib/util/crc32c.o 00:03:42.792 CC lib/util/crc32_ieee.o 00:03:42.792 CC lib/util/crc64.o 00:03:42.792 CC lib/util/dif.o 00:03:42.792 CC lib/util/fd.o 00:03:42.792 CC lib/util/fd_group.o 00:03:42.792 CC lib/util/file.o 00:03:42.792 CC lib/util/hexlify.o 00:03:42.792 CC lib/util/iov.o 00:03:42.792 CC lib/util/math.o 00:03:42.792 CC lib/util/net.o 00:03:42.792 CC lib/util/pipe.o 00:03:42.792 CC lib/util/strerror_tls.o 00:03:42.792 CC lib/util/string.o 00:03:42.792 CC lib/util/uuid.o 00:03:42.792 CC lib/util/xor.o 00:03:42.792 CC lib/util/zipf.o 00:03:42.792 CC lib/util/md5.o 00:03:42.792 CC lib/vfio_user/host/vfio_user_pci.o 00:03:42.792 CC lib/vfio_user/host/vfio_user.o 00:03:43.051 LIB libspdk_dma.a 00:03:43.051 SO libspdk_dma.so.5.0 00:03:43.051 LIB libspdk_ioat.a 00:03:43.051 SYMLINK libspdk_dma.so 00:03:43.051 SO libspdk_ioat.so.7.0 00:03:43.051 LIB libspdk_vfio_user.a 00:03:43.051 SYMLINK libspdk_ioat.so 00:03:43.051 SO libspdk_vfio_user.so.5.0 00:03:43.311 SYMLINK libspdk_vfio_user.so 00:03:43.311 LIB libspdk_util.a 00:03:43.311 SO libspdk_util.so.10.1 00:03:43.311 SYMLINK libspdk_util.so 00:03:43.880 CC lib/json/json_parse.o 00:03:43.880 CC lib/json/json_util.o 00:03:43.880 CC lib/json/json_write.o 00:03:43.880 CC lib/conf/conf.o 00:03:43.880 CC lib/rdma_utils/rdma_utils.o 00:03:43.880 CC lib/vmd/vmd.o 00:03:43.880 CC lib/vmd/led.o 00:03:43.880 CC lib/idxd/idxd.o 00:03:43.880 CC lib/idxd/idxd_user.o 00:03:43.880 CC lib/idxd/idxd_kernel.o 00:03:43.880 CC lib/env_dpdk/env.o 00:03:43.880 CC lib/env_dpdk/memory.o 00:03:43.880 CC lib/env_dpdk/pci.o 00:03:43.880 CC lib/env_dpdk/init.o 00:03:43.880 CC lib/env_dpdk/threads.o 00:03:43.880 CC lib/env_dpdk/pci_ioat.o 00:03:43.880 CC lib/env_dpdk/pci_virtio.o 00:03:43.880 CC lib/env_dpdk/pci_vmd.o 00:03:43.880 CC lib/env_dpdk/pci_idxd.o 00:03:43.880 CC lib/env_dpdk/pci_event.o 00:03:43.880 CC lib/env_dpdk/sigbus_handler.o 00:03:43.880 CC lib/env_dpdk/pci_dpdk.o 00:03:43.880 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:43.880 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:43.880 LIB libspdk_conf.a 00:03:44.139 LIB libspdk_json.a 00:03:44.139 SO libspdk_conf.so.6.0 00:03:44.139 LIB libspdk_rdma_utils.a 00:03:44.139 SO libspdk_json.so.6.0 00:03:44.139 SO libspdk_rdma_utils.so.1.0 00:03:44.139 SYMLINK libspdk_conf.so 00:03:44.139 SYMLINK libspdk_json.so 00:03:44.139 SYMLINK libspdk_rdma_utils.so 00:03:44.139 LIB libspdk_idxd.a 00:03:44.139 SO 
libspdk_idxd.so.12.1 00:03:44.139 LIB libspdk_vmd.a 00:03:44.397 SO libspdk_vmd.so.6.0 00:03:44.397 SYMLINK libspdk_idxd.so 00:03:44.397 LIB libspdk_trace_parser.a 00:03:44.397 SYMLINK libspdk_vmd.so 00:03:44.397 SO libspdk_trace_parser.so.6.0 00:03:44.397 CC lib/jsonrpc/jsonrpc_server.o 00:03:44.397 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:44.397 CC lib/jsonrpc/jsonrpc_client.o 00:03:44.397 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:44.397 CC lib/rdma_provider/common.o 00:03:44.397 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:44.397 SYMLINK libspdk_trace_parser.so 00:03:44.656 LIB libspdk_rdma_provider.a 00:03:44.656 LIB libspdk_jsonrpc.a 00:03:44.656 SO libspdk_rdma_provider.so.7.0 00:03:44.656 SO libspdk_jsonrpc.so.6.0 00:03:44.656 SYMLINK libspdk_rdma_provider.so 00:03:44.656 SYMLINK libspdk_jsonrpc.so 00:03:44.916 LIB libspdk_env_dpdk.a 00:03:44.916 SO libspdk_env_dpdk.so.15.1 00:03:44.916 SYMLINK libspdk_env_dpdk.so 00:03:45.175 CC lib/rpc/rpc.o 00:03:45.434 LIB libspdk_rpc.a 00:03:45.434 SO libspdk_rpc.so.6.0 00:03:45.434 SYMLINK libspdk_rpc.so 00:03:45.692 CC lib/keyring/keyring.o 00:03:45.692 CC lib/trace/trace.o 00:03:45.692 CC lib/trace/trace_flags.o 00:03:45.692 CC lib/keyring/keyring_rpc.o 00:03:45.692 CC lib/trace/trace_rpc.o 00:03:45.692 CC lib/notify/notify.o 00:03:45.692 CC lib/notify/notify_rpc.o 00:03:45.951 LIB libspdk_notify.a 00:03:45.951 SO libspdk_notify.so.6.0 00:03:45.951 LIB libspdk_keyring.a 00:03:45.951 LIB libspdk_trace.a 00:03:45.951 SO libspdk_keyring.so.2.0 00:03:45.951 SO libspdk_trace.so.11.0 00:03:45.951 SYMLINK libspdk_notify.so 00:03:45.951 SYMLINK libspdk_keyring.so 00:03:46.210 SYMLINK libspdk_trace.so 00:03:46.468 CC lib/thread/thread.o 00:03:46.468 CC lib/thread/iobuf.o 00:03:46.468 CC lib/sock/sock.o 00:03:46.468 CC lib/sock/sock_rpc.o 00:03:46.727 LIB libspdk_sock.a 00:03:46.727 SO libspdk_sock.so.10.0 00:03:46.986 SYMLINK libspdk_sock.so 00:03:47.245 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:47.245 CC lib/nvme/nvme_ctrlr.o 00:03:47.245 CC lib/nvme/nvme_fabric.o 00:03:47.245 CC lib/nvme/nvme_ns_cmd.o 00:03:47.245 CC lib/nvme/nvme_ns.o 00:03:47.245 CC lib/nvme/nvme_pcie_common.o 00:03:47.245 CC lib/nvme/nvme_pcie.o 00:03:47.245 CC lib/nvme/nvme_qpair.o 00:03:47.245 CC lib/nvme/nvme.o 00:03:47.245 CC lib/nvme/nvme_quirks.o 00:03:47.245 CC lib/nvme/nvme_transport.o 00:03:47.245 CC lib/nvme/nvme_discovery.o 00:03:47.245 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:47.245 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:47.245 CC lib/nvme/nvme_tcp.o 00:03:47.245 CC lib/nvme/nvme_opal.o 00:03:47.245 CC lib/nvme/nvme_io_msg.o 00:03:47.245 CC lib/nvme/nvme_poll_group.o 00:03:47.245 CC lib/nvme/nvme_zns.o 00:03:47.245 CC lib/nvme/nvme_stubs.o 00:03:47.245 CC lib/nvme/nvme_auth.o 00:03:47.245 CC lib/nvme/nvme_cuse.o 00:03:47.245 CC lib/nvme/nvme_vfio_user.o 00:03:47.245 CC lib/nvme/nvme_rdma.o 00:03:47.504 LIB libspdk_thread.a 00:03:47.504 SO libspdk_thread.so.11.0 00:03:47.504 SYMLINK libspdk_thread.so 00:03:48.071 CC lib/fsdev/fsdev.o 00:03:48.071 CC lib/fsdev/fsdev_io.o 00:03:48.071 CC lib/fsdev/fsdev_rpc.o 00:03:48.071 CC lib/accel/accel_rpc.o 00:03:48.071 CC lib/accel/accel.o 00:03:48.071 CC lib/accel/accel_sw.o 00:03:48.071 CC lib/blob/blobstore.o 00:03:48.071 CC lib/blob/request.o 00:03:48.071 CC lib/blob/zeroes.o 00:03:48.071 CC lib/blob/blob_bs_dev.o 00:03:48.071 CC lib/init/json_config.o 00:03:48.071 CC lib/init/subsystem.o 00:03:48.071 CC lib/vfu_tgt/tgt_endpoint.o 00:03:48.071 CC lib/init/subsystem_rpc.o 00:03:48.071 CC lib/vfu_tgt/tgt_rpc.o 00:03:48.071 CC 
lib/init/rpc.o 00:03:48.071 CC lib/virtio/virtio.o 00:03:48.071 CC lib/virtio/virtio_vhost_user.o 00:03:48.071 CC lib/virtio/virtio_pci.o 00:03:48.071 CC lib/virtio/virtio_vfio_user.o 00:03:48.071 LIB libspdk_init.a 00:03:48.329 SO libspdk_init.so.6.0 00:03:48.329 LIB libspdk_vfu_tgt.a 00:03:48.329 SYMLINK libspdk_init.so 00:03:48.329 LIB libspdk_virtio.a 00:03:48.329 SO libspdk_vfu_tgt.so.3.0 00:03:48.329 SO libspdk_virtio.so.7.0 00:03:48.329 SYMLINK libspdk_vfu_tgt.so 00:03:48.329 SYMLINK libspdk_virtio.so 00:03:48.588 LIB libspdk_fsdev.a 00:03:48.588 SO libspdk_fsdev.so.2.0 00:03:48.588 SYMLINK libspdk_fsdev.so 00:03:48.588 CC lib/event/app.o 00:03:48.588 CC lib/event/reactor.o 00:03:48.588 CC lib/event/log_rpc.o 00:03:48.588 CC lib/event/app_rpc.o 00:03:48.588 CC lib/event/scheduler_static.o 00:03:48.845 LIB libspdk_accel.a 00:03:48.845 SO libspdk_accel.so.16.0 00:03:48.845 LIB libspdk_nvme.a 00:03:48.845 SYMLINK libspdk_accel.so 00:03:48.845 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:49.123 LIB libspdk_event.a 00:03:49.123 SO libspdk_nvme.so.15.0 00:03:49.123 SO libspdk_event.so.14.0 00:03:49.123 SYMLINK libspdk_event.so 00:03:49.123 CC lib/bdev/bdev_rpc.o 00:03:49.123 CC lib/bdev/bdev_zone.o 00:03:49.123 CC lib/bdev/part.o 00:03:49.123 CC lib/bdev/bdev.o 00:03:49.123 CC lib/bdev/scsi_nvme.o 00:03:49.123 SYMLINK libspdk_nvme.so 00:03:49.391 LIB libspdk_fuse_dispatcher.a 00:03:49.391 SO libspdk_fuse_dispatcher.so.1.0 00:03:49.391 SYMLINK libspdk_fuse_dispatcher.so 00:03:50.378 LIB libspdk_blob.a 00:03:50.378 SO libspdk_blob.so.12.0 00:03:50.378 SYMLINK libspdk_blob.so 00:03:50.655 CC lib/blobfs/blobfs.o 00:03:50.655 CC lib/blobfs/tree.o 00:03:50.655 CC lib/lvol/lvol.o 00:03:50.947 LIB libspdk_bdev.a 00:03:51.222 SO libspdk_bdev.so.17.0 00:03:51.222 LIB libspdk_blobfs.a 00:03:51.222 SYMLINK libspdk_bdev.so 00:03:51.222 SO libspdk_blobfs.so.11.0 00:03:51.222 LIB libspdk_lvol.a 00:03:51.222 SYMLINK libspdk_blobfs.so 00:03:51.222 SO libspdk_lvol.so.11.0 00:03:51.222 SYMLINK libspdk_lvol.so 00:03:51.500 CC lib/ublk/ublk.o 00:03:51.500 CC lib/ublk/ublk_rpc.o 00:03:51.500 CC lib/nvmf/ctrlr.o 00:03:51.500 CC lib/nvmf/ctrlr_discovery.o 00:03:51.500 CC lib/nvmf/ctrlr_bdev.o 00:03:51.500 CC lib/nvmf/nvmf.o 00:03:51.500 CC lib/nvmf/subsystem.o 00:03:51.500 CC lib/nvmf/nvmf_rpc.o 00:03:51.500 CC lib/nvmf/transport.o 00:03:51.500 CC lib/nvmf/mdns_server.o 00:03:51.500 CC lib/nvmf/tcp.o 00:03:51.500 CC lib/nvmf/stubs.o 00:03:51.500 CC lib/nvmf/vfio_user.o 00:03:51.500 CC lib/nvmf/rdma.o 00:03:51.500 CC lib/nvmf/auth.o 00:03:51.500 CC lib/ftl/ftl_init.o 00:03:51.500 CC lib/ftl/ftl_core.o 00:03:51.500 CC lib/ftl/ftl_layout.o 00:03:51.500 CC lib/ftl/ftl_debug.o 00:03:51.500 CC lib/ftl/ftl_io.o 00:03:51.500 CC lib/ftl/ftl_sb.o 00:03:51.500 CC lib/ftl/ftl_l2p.o 00:03:51.500 CC lib/ftl/ftl_l2p_flat.o 00:03:51.500 CC lib/ftl/ftl_nv_cache.o 00:03:51.500 CC lib/ftl/ftl_band.o 00:03:51.500 CC lib/ftl/ftl_band_ops.o 00:03:51.500 CC lib/ftl/ftl_writer.o 00:03:51.500 CC lib/ftl/ftl_rq.o 00:03:51.500 CC lib/ftl/ftl_reloc.o 00:03:51.500 CC lib/ftl/ftl_l2p_cache.o 00:03:51.500 CC lib/scsi/dev.o 00:03:51.500 CC lib/scsi/scsi.o 00:03:51.500 CC lib/ftl/ftl_p2l.o 00:03:51.500 CC lib/scsi/lun.o 00:03:51.500 CC lib/nbd/nbd.o 00:03:51.500 CC lib/scsi/port.o 00:03:51.500 CC lib/ftl/mngt/ftl_mngt.o 00:03:51.500 CC lib/ftl/ftl_p2l_log.o 00:03:51.500 CC lib/nbd/nbd_rpc.o 00:03:51.500 CC lib/scsi/scsi_bdev.o 00:03:51.500 CC lib/scsi/scsi_pr.o 00:03:51.500 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:51.500 CC 
lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:51.500 CC lib/scsi/scsi_rpc.o 00:03:51.500 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:51.500 CC lib/scsi/task.o 00:03:51.500 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:51.500 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:51.500 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:51.500 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:51.500 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:51.500 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:51.500 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:51.500 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:51.500 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:51.500 CC lib/ftl/utils/ftl_conf.o 00:03:51.500 CC lib/ftl/utils/ftl_md.o 00:03:51.500 CC lib/ftl/utils/ftl_mempool.o 00:03:51.500 CC lib/ftl/utils/ftl_bitmap.o 00:03:51.500 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:51.500 CC lib/ftl/utils/ftl_property.o 00:03:51.500 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:51.500 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:51.500 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:51.500 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:51.500 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:51.500 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:51.500 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:51.500 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:51.500 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:51.500 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:51.500 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:51.500 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:51.501 CC lib/ftl/base/ftl_base_bdev.o 00:03:51.501 CC lib/ftl/base/ftl_base_dev.o 00:03:51.501 CC lib/ftl/ftl_trace.o 00:03:52.437 LIB libspdk_scsi.a 00:03:52.437 LIB libspdk_nbd.a 00:03:52.437 SO libspdk_scsi.so.9.0 00:03:52.437 SO libspdk_nbd.so.7.0 00:03:52.437 SYMLINK libspdk_nbd.so 00:03:52.437 SYMLINK libspdk_scsi.so 00:03:52.437 LIB libspdk_ublk.a 00:03:52.437 SO libspdk_ublk.so.3.0 00:03:52.437 SYMLINK libspdk_ublk.so 00:03:52.437 LIB libspdk_ftl.a 00:03:52.696 CC lib/vhost/vhost.o 00:03:52.696 CC lib/vhost/vhost_rpc.o 00:03:52.696 CC lib/vhost/vhost_scsi.o 00:03:52.696 CC lib/vhost/vhost_blk.o 00:03:52.696 CC lib/vhost/rte_vhost_user.o 00:03:52.696 CC lib/iscsi/conn.o 00:03:52.696 CC lib/iscsi/init_grp.o 00:03:52.696 CC lib/iscsi/iscsi.o 00:03:52.696 CC lib/iscsi/param.o 00:03:52.696 CC lib/iscsi/portal_grp.o 00:03:52.696 CC lib/iscsi/tgt_node.o 00:03:52.696 CC lib/iscsi/iscsi_subsystem.o 00:03:52.696 CC lib/iscsi/iscsi_rpc.o 00:03:52.696 CC lib/iscsi/task.o 00:03:52.696 SO libspdk_ftl.so.9.0 00:03:52.955 SYMLINK libspdk_ftl.so 00:03:53.524 LIB libspdk_nvmf.a 00:03:53.524 SO libspdk_nvmf.so.20.0 00:03:53.524 LIB libspdk_vhost.a 00:03:53.524 SO libspdk_vhost.so.8.0 00:03:53.524 SYMLINK libspdk_nvmf.so 00:03:53.524 SYMLINK libspdk_vhost.so 00:03:53.783 LIB libspdk_iscsi.a 00:03:53.783 SO libspdk_iscsi.so.8.0 00:03:53.783 SYMLINK libspdk_iscsi.so 00:03:54.352 CC module/vfu_device/vfu_virtio.o 00:03:54.352 CC module/vfu_device/vfu_virtio_blk.o 00:03:54.352 CC module/vfu_device/vfu_virtio_scsi.o 00:03:54.352 CC module/vfu_device/vfu_virtio_rpc.o 00:03:54.352 CC module/vfu_device/vfu_virtio_fs.o 00:03:54.352 CC module/env_dpdk/env_dpdk_rpc.o 00:03:54.612 LIB libspdk_env_dpdk_rpc.a 00:03:54.612 CC module/sock/posix/posix.o 00:03:54.612 CC module/accel/iaa/accel_iaa.o 00:03:54.612 CC module/accel/iaa/accel_iaa_rpc.o 00:03:54.612 CC module/accel/error/accel_error.o 00:03:54.612 CC module/accel/error/accel_error_rpc.o 00:03:54.612 CC module/keyring/linux/keyring.o 00:03:54.612 CC module/fsdev/aio/fsdev_aio.o 00:03:54.612 CC module/keyring/linux/keyring_rpc.o 00:03:54.612 CC 
module/fsdev/aio/fsdev_aio_rpc.o 00:03:54.612 CC module/blob/bdev/blob_bdev.o 00:03:54.612 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:54.612 SO libspdk_env_dpdk_rpc.so.6.0 00:03:54.612 CC module/fsdev/aio/linux_aio_mgr.o 00:03:54.612 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:54.612 CC module/scheduler/gscheduler/gscheduler.o 00:03:54.612 CC module/accel/ioat/accel_ioat.o 00:03:54.612 CC module/accel/ioat/accel_ioat_rpc.o 00:03:54.612 CC module/accel/dsa/accel_dsa.o 00:03:54.612 CC module/accel/dsa/accel_dsa_rpc.o 00:03:54.612 CC module/keyring/file/keyring.o 00:03:54.612 CC module/keyring/file/keyring_rpc.o 00:03:54.612 SYMLINK libspdk_env_dpdk_rpc.so 00:03:54.871 LIB libspdk_scheduler_dpdk_governor.a 00:03:54.871 LIB libspdk_keyring_linux.a 00:03:54.871 LIB libspdk_scheduler_gscheduler.a 00:03:54.871 LIB libspdk_keyring_file.a 00:03:54.871 LIB libspdk_scheduler_dynamic.a 00:03:54.871 SO libspdk_keyring_linux.so.1.0 00:03:54.871 SO libspdk_scheduler_gscheduler.so.4.0 00:03:54.871 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:54.871 SO libspdk_keyring_file.so.2.0 00:03:54.871 LIB libspdk_accel_ioat.a 00:03:54.871 LIB libspdk_accel_iaa.a 00:03:54.871 SO libspdk_scheduler_dynamic.so.4.0 00:03:54.871 LIB libspdk_accel_error.a 00:03:54.871 SO libspdk_accel_ioat.so.6.0 00:03:54.871 SO libspdk_accel_iaa.so.3.0 00:03:54.871 SO libspdk_accel_error.so.2.0 00:03:54.871 LIB libspdk_blob_bdev.a 00:03:54.871 SYMLINK libspdk_scheduler_gscheduler.so 00:03:54.871 SYMLINK libspdk_keyring_linux.so 00:03:54.871 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:54.871 SYMLINK libspdk_scheduler_dynamic.so 00:03:54.871 SYMLINK libspdk_keyring_file.so 00:03:54.871 LIB libspdk_accel_dsa.a 00:03:54.871 SYMLINK libspdk_accel_ioat.so 00:03:54.871 SO libspdk_blob_bdev.so.12.0 00:03:54.871 SYMLINK libspdk_accel_iaa.so 00:03:54.871 SYMLINK libspdk_accel_error.so 00:03:54.871 LIB libspdk_vfu_device.a 00:03:54.871 SO libspdk_accel_dsa.so.5.0 00:03:55.128 SYMLINK libspdk_blob_bdev.so 00:03:55.128 SO libspdk_vfu_device.so.3.0 00:03:55.128 SYMLINK libspdk_accel_dsa.so 00:03:55.128 SYMLINK libspdk_vfu_device.so 00:03:55.128 LIB libspdk_fsdev_aio.a 00:03:55.128 LIB libspdk_sock_posix.a 00:03:55.128 SO libspdk_fsdev_aio.so.1.0 00:03:55.387 SO libspdk_sock_posix.so.6.0 00:03:55.387 SYMLINK libspdk_fsdev_aio.so 00:03:55.387 SYMLINK libspdk_sock_posix.so 00:03:55.387 CC module/bdev/gpt/vbdev_gpt.o 00:03:55.387 CC module/bdev/gpt/gpt.o 00:03:55.387 CC module/bdev/passthru/vbdev_passthru.o 00:03:55.647 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:55.647 CC module/bdev/error/vbdev_error.o 00:03:55.647 CC module/bdev/error/vbdev_error_rpc.o 00:03:55.647 CC module/bdev/delay/vbdev_delay.o 00:03:55.647 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:55.647 CC module/bdev/lvol/vbdev_lvol.o 00:03:55.647 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:55.647 CC module/bdev/nvme/bdev_nvme.o 00:03:55.647 CC module/bdev/malloc/bdev_malloc.o 00:03:55.647 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:55.647 CC module/bdev/nvme/nvme_rpc.o 00:03:55.647 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:55.647 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:55.647 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:55.647 CC module/bdev/ftl/bdev_ftl.o 00:03:55.647 CC module/bdev/nvme/bdev_mdns_client.o 00:03:55.647 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:55.647 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:55.647 CC module/bdev/nvme/vbdev_opal.o 00:03:55.647 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:55.647 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:03:55.647 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:55.647 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:55.647 CC module/bdev/split/vbdev_split.o 00:03:55.647 CC module/bdev/raid/bdev_raid.o 00:03:55.647 CC module/bdev/split/vbdev_split_rpc.o 00:03:55.647 CC module/bdev/raid/bdev_raid_rpc.o 00:03:55.647 CC module/bdev/null/bdev_null.o 00:03:55.647 CC module/bdev/raid/bdev_raid_sb.o 00:03:55.647 CC module/bdev/raid/raid0.o 00:03:55.647 CC module/bdev/raid/raid1.o 00:03:55.647 CC module/bdev/iscsi/bdev_iscsi.o 00:03:55.647 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:55.647 CC module/bdev/aio/bdev_aio.o 00:03:55.647 CC module/bdev/null/bdev_null_rpc.o 00:03:55.647 CC module/bdev/raid/concat.o 00:03:55.647 CC module/bdev/aio/bdev_aio_rpc.o 00:03:55.647 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:55.647 CC module/blobfs/bdev/blobfs_bdev.o 00:03:55.906 LIB libspdk_bdev_gpt.a 00:03:55.906 LIB libspdk_bdev_error.a 00:03:55.906 LIB libspdk_blobfs_bdev.a 00:03:55.906 LIB libspdk_bdev_split.a 00:03:55.906 SO libspdk_bdev_error.so.6.0 00:03:55.906 LIB libspdk_bdev_null.a 00:03:55.906 SO libspdk_blobfs_bdev.so.6.0 00:03:55.906 LIB libspdk_bdev_ftl.a 00:03:55.906 SO libspdk_bdev_gpt.so.6.0 00:03:55.906 SO libspdk_bdev_split.so.6.0 00:03:55.906 SO libspdk_bdev_ftl.so.6.0 00:03:55.906 SO libspdk_bdev_null.so.6.0 00:03:55.906 LIB libspdk_bdev_passthru.a 00:03:55.906 SYMLINK libspdk_blobfs_bdev.so 00:03:55.906 LIB libspdk_bdev_aio.a 00:03:55.906 SYMLINK libspdk_bdev_error.so 00:03:55.906 SYMLINK libspdk_bdev_gpt.so 00:03:55.906 LIB libspdk_bdev_malloc.a 00:03:55.906 SO libspdk_bdev_passthru.so.6.0 00:03:55.906 SYMLINK libspdk_bdev_split.so 00:03:55.906 LIB libspdk_bdev_zone_block.a 00:03:55.906 SYMLINK libspdk_bdev_null.so 00:03:55.906 SO libspdk_bdev_aio.so.6.0 00:03:55.906 SYMLINK libspdk_bdev_ftl.so 00:03:55.906 LIB libspdk_bdev_delay.a 00:03:55.906 SO libspdk_bdev_malloc.so.6.0 00:03:55.906 LIB libspdk_bdev_iscsi.a 00:03:55.906 SO libspdk_bdev_zone_block.so.6.0 00:03:55.906 SO libspdk_bdev_delay.so.6.0 00:03:55.906 SYMLINK libspdk_bdev_passthru.so 00:03:55.906 SYMLINK libspdk_bdev_aio.so 00:03:56.166 SO libspdk_bdev_iscsi.so.6.0 00:03:56.166 SYMLINK libspdk_bdev_malloc.so 00:03:56.166 LIB libspdk_bdev_lvol.a 00:03:56.166 SYMLINK libspdk_bdev_zone_block.so 00:03:56.166 LIB libspdk_bdev_virtio.a 00:03:56.166 SYMLINK libspdk_bdev_delay.so 00:03:56.166 SO libspdk_bdev_lvol.so.6.0 00:03:56.166 SYMLINK libspdk_bdev_iscsi.so 00:03:56.166 SO libspdk_bdev_virtio.so.6.0 00:03:56.166 SYMLINK libspdk_bdev_lvol.so 00:03:56.166 SYMLINK libspdk_bdev_virtio.so 00:03:56.425 LIB libspdk_bdev_raid.a 00:03:56.425 SO libspdk_bdev_raid.so.6.0 00:03:56.684 SYMLINK libspdk_bdev_raid.so 00:03:57.625 LIB libspdk_bdev_nvme.a 00:03:57.625 SO libspdk_bdev_nvme.so.7.1 00:03:57.625 SYMLINK libspdk_bdev_nvme.so 00:03:58.563 CC module/event/subsystems/vmd/vmd.o 00:03:58.563 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:58.563 CC module/event/subsystems/keyring/keyring.o 00:03:58.563 CC module/event/subsystems/iobuf/iobuf.o 00:03:58.563 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:58.563 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:58.563 CC module/event/subsystems/sock/sock.o 00:03:58.563 CC module/event/subsystems/fsdev/fsdev.o 00:03:58.563 CC module/event/subsystems/scheduler/scheduler.o 00:03:58.563 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:58.563 LIB libspdk_event_keyring.a 00:03:58.563 LIB libspdk_event_vhost_blk.a 00:03:58.563 LIB libspdk_event_sock.a 
00:03:58.563 LIB libspdk_event_vmd.a 00:03:58.563 LIB libspdk_event_vfu_tgt.a 00:03:58.563 LIB libspdk_event_fsdev.a 00:03:58.563 SO libspdk_event_keyring.so.1.0 00:03:58.563 LIB libspdk_event_scheduler.a 00:03:58.563 LIB libspdk_event_iobuf.a 00:03:58.563 SO libspdk_event_vhost_blk.so.3.0 00:03:58.563 SO libspdk_event_vmd.so.6.0 00:03:58.563 SO libspdk_event_sock.so.5.0 00:03:58.563 SO libspdk_event_scheduler.so.4.0 00:03:58.563 SO libspdk_event_vfu_tgt.so.3.0 00:03:58.563 SO libspdk_event_fsdev.so.1.0 00:03:58.563 SO libspdk_event_iobuf.so.3.0 00:03:58.563 SYMLINK libspdk_event_keyring.so 00:03:58.563 SYMLINK libspdk_event_vmd.so 00:03:58.563 SYMLINK libspdk_event_vhost_blk.so 00:03:58.563 SYMLINK libspdk_event_sock.so 00:03:58.563 SYMLINK libspdk_event_scheduler.so 00:03:58.563 SYMLINK libspdk_event_fsdev.so 00:03:58.563 SYMLINK libspdk_event_vfu_tgt.so 00:03:58.563 SYMLINK libspdk_event_iobuf.so 00:03:59.132 CC module/event/subsystems/accel/accel.o 00:03:59.132 LIB libspdk_event_accel.a 00:03:59.132 SO libspdk_event_accel.so.6.0 00:03:59.132 SYMLINK libspdk_event_accel.so 00:03:59.699 CC module/event/subsystems/bdev/bdev.o 00:03:59.699 LIB libspdk_event_bdev.a 00:03:59.699 SO libspdk_event_bdev.so.6.0 00:03:59.699 SYMLINK libspdk_event_bdev.so 00:04:00.266 CC module/event/subsystems/ublk/ublk.o 00:04:00.266 CC module/event/subsystems/scsi/scsi.o 00:04:00.266 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:00.266 CC module/event/subsystems/nbd/nbd.o 00:04:00.266 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:00.266 LIB libspdk_event_ublk.a 00:04:00.266 LIB libspdk_event_nbd.a 00:04:00.266 LIB libspdk_event_scsi.a 00:04:00.266 SO libspdk_event_scsi.so.6.0 00:04:00.266 SO libspdk_event_ublk.so.3.0 00:04:00.266 SO libspdk_event_nbd.so.6.0 00:04:00.266 LIB libspdk_event_nvmf.a 00:04:00.266 SYMLINK libspdk_event_scsi.so 00:04:00.266 SYMLINK libspdk_event_ublk.so 00:04:00.266 SYMLINK libspdk_event_nbd.so 00:04:00.525 SO libspdk_event_nvmf.so.6.0 00:04:00.525 SYMLINK libspdk_event_nvmf.so 00:04:00.785 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:00.785 CC module/event/subsystems/iscsi/iscsi.o 00:04:00.785 LIB libspdk_event_vhost_scsi.a 00:04:00.785 SO libspdk_event_vhost_scsi.so.3.0 00:04:00.785 LIB libspdk_event_iscsi.a 00:04:01.044 SO libspdk_event_iscsi.so.6.0 00:04:01.044 SYMLINK libspdk_event_vhost_scsi.so 00:04:01.044 SYMLINK libspdk_event_iscsi.so 00:04:01.303 SO libspdk.so.6.0 00:04:01.303 SYMLINK libspdk.so 00:04:01.562 CXX app/trace/trace.o 00:04:01.562 CC app/spdk_nvme_identify/identify.o 00:04:01.562 CC app/trace_record/trace_record.o 00:04:01.562 CC app/spdk_nvme_discover/discovery_aer.o 00:04:01.562 CC app/spdk_top/spdk_top.o 00:04:01.562 CC app/spdk_nvme_perf/perf.o 00:04:01.562 TEST_HEADER include/spdk/accel.h 00:04:01.562 TEST_HEADER include/spdk/accel_module.h 00:04:01.562 CC test/rpc_client/rpc_client_test.o 00:04:01.562 CC app/spdk_lspci/spdk_lspci.o 00:04:01.562 TEST_HEADER include/spdk/barrier.h 00:04:01.562 TEST_HEADER include/spdk/assert.h 00:04:01.562 TEST_HEADER include/spdk/base64.h 00:04:01.562 TEST_HEADER include/spdk/bdev.h 00:04:01.562 TEST_HEADER include/spdk/bdev_module.h 00:04:01.562 TEST_HEADER include/spdk/bdev_zone.h 00:04:01.562 TEST_HEADER include/spdk/bit_array.h 00:04:01.562 TEST_HEADER include/spdk/blobfs.h 00:04:01.562 TEST_HEADER include/spdk/bit_pool.h 00:04:01.562 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:01.562 TEST_HEADER include/spdk/blob_bdev.h 00:04:01.562 TEST_HEADER include/spdk/blob.h 00:04:01.562 TEST_HEADER 
include/spdk/conf.h 00:04:01.562 TEST_HEADER include/spdk/config.h 00:04:01.562 TEST_HEADER include/spdk/crc16.h 00:04:01.562 TEST_HEADER include/spdk/cpuset.h 00:04:01.562 TEST_HEADER include/spdk/crc64.h 00:04:01.562 TEST_HEADER include/spdk/crc32.h 00:04:01.562 TEST_HEADER include/spdk/dif.h 00:04:01.562 TEST_HEADER include/spdk/dma.h 00:04:01.562 TEST_HEADER include/spdk/event.h 00:04:01.562 TEST_HEADER include/spdk/endian.h 00:04:01.562 TEST_HEADER include/spdk/env.h 00:04:01.562 TEST_HEADER include/spdk/env_dpdk.h 00:04:01.562 TEST_HEADER include/spdk/fd_group.h 00:04:01.562 TEST_HEADER include/spdk/fd.h 00:04:01.562 TEST_HEADER include/spdk/fsdev.h 00:04:01.562 TEST_HEADER include/spdk/file.h 00:04:01.562 TEST_HEADER include/spdk/fsdev_module.h 00:04:01.562 TEST_HEADER include/spdk/ftl.h 00:04:01.562 TEST_HEADER include/spdk/gpt_spec.h 00:04:01.562 TEST_HEADER include/spdk/hexlify.h 00:04:01.562 CC app/spdk_dd/spdk_dd.o 00:04:01.562 TEST_HEADER include/spdk/idxd_spec.h 00:04:01.562 TEST_HEADER include/spdk/histogram_data.h 00:04:01.562 TEST_HEADER include/spdk/init.h 00:04:01.562 TEST_HEADER include/spdk/idxd.h 00:04:01.562 TEST_HEADER include/spdk/ioat_spec.h 00:04:01.562 TEST_HEADER include/spdk/ioat.h 00:04:01.562 TEST_HEADER include/spdk/iscsi_spec.h 00:04:01.562 TEST_HEADER include/spdk/keyring.h 00:04:01.562 TEST_HEADER include/spdk/json.h 00:04:01.562 TEST_HEADER include/spdk/jsonrpc.h 00:04:01.562 TEST_HEADER include/spdk/keyring_module.h 00:04:01.562 TEST_HEADER include/spdk/log.h 00:04:01.562 TEST_HEADER include/spdk/likely.h 00:04:01.562 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:01.562 TEST_HEADER include/spdk/lvol.h 00:04:01.562 TEST_HEADER include/spdk/md5.h 00:04:01.562 TEST_HEADER include/spdk/mmio.h 00:04:01.562 CC app/nvmf_tgt/nvmf_main.o 00:04:01.562 TEST_HEADER include/spdk/memory.h 00:04:01.562 TEST_HEADER include/spdk/notify.h 00:04:01.562 TEST_HEADER include/spdk/net.h 00:04:01.562 TEST_HEADER include/spdk/nvme_intel.h 00:04:01.562 TEST_HEADER include/spdk/nbd.h 00:04:01.562 TEST_HEADER include/spdk/nvme.h 00:04:01.562 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:01.563 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:01.563 TEST_HEADER include/spdk/nvme_spec.h 00:04:01.563 TEST_HEADER include/spdk/nvme_zns.h 00:04:01.563 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:01.563 TEST_HEADER include/spdk/nvmf.h 00:04:01.563 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:01.563 TEST_HEADER include/spdk/nvmf_transport.h 00:04:01.563 TEST_HEADER include/spdk/opal_spec.h 00:04:01.563 TEST_HEADER include/spdk/nvmf_spec.h 00:04:01.563 TEST_HEADER include/spdk/opal.h 00:04:01.563 CC app/iscsi_tgt/iscsi_tgt.o 00:04:01.563 TEST_HEADER include/spdk/pipe.h 00:04:01.563 TEST_HEADER include/spdk/pci_ids.h 00:04:01.563 TEST_HEADER include/spdk/queue.h 00:04:01.563 TEST_HEADER include/spdk/reduce.h 00:04:01.563 TEST_HEADER include/spdk/rpc.h 00:04:01.563 TEST_HEADER include/spdk/scheduler.h 00:04:01.563 CC app/spdk_tgt/spdk_tgt.o 00:04:01.563 TEST_HEADER include/spdk/scsi.h 00:04:01.563 TEST_HEADER include/spdk/scsi_spec.h 00:04:01.563 TEST_HEADER include/spdk/sock.h 00:04:01.563 TEST_HEADER include/spdk/string.h 00:04:01.563 TEST_HEADER include/spdk/thread.h 00:04:01.563 TEST_HEADER include/spdk/stdinc.h 00:04:01.563 TEST_HEADER include/spdk/trace.h 00:04:01.563 TEST_HEADER include/spdk/tree.h 00:04:01.563 TEST_HEADER include/spdk/trace_parser.h 00:04:01.563 TEST_HEADER include/spdk/ublk.h 00:04:01.563 TEST_HEADER include/spdk/util.h 00:04:01.563 TEST_HEADER include/spdk/uuid.h 
00:04:01.563 TEST_HEADER include/spdk/version.h 00:04:01.563 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:01.563 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:01.563 TEST_HEADER include/spdk/vhost.h 00:04:01.563 TEST_HEADER include/spdk/vmd.h 00:04:01.563 TEST_HEADER include/spdk/zipf.h 00:04:01.563 TEST_HEADER include/spdk/xor.h 00:04:01.563 CXX test/cpp_headers/accel.o 00:04:01.563 CXX test/cpp_headers/assert.o 00:04:01.563 CXX test/cpp_headers/accel_module.o 00:04:01.563 CXX test/cpp_headers/base64.o 00:04:01.563 CXX test/cpp_headers/bdev_module.o 00:04:01.563 CXX test/cpp_headers/barrier.o 00:04:01.563 CXX test/cpp_headers/bdev_zone.o 00:04:01.563 CXX test/cpp_headers/bdev.o 00:04:01.563 CXX test/cpp_headers/blob_bdev.o 00:04:01.563 CXX test/cpp_headers/bit_array.o 00:04:01.563 CXX test/cpp_headers/bit_pool.o 00:04:01.563 CXX test/cpp_headers/blobfs_bdev.o 00:04:01.563 CXX test/cpp_headers/blobfs.o 00:04:01.563 CXX test/cpp_headers/blob.o 00:04:01.563 CXX test/cpp_headers/config.o 00:04:01.563 CXX test/cpp_headers/conf.o 00:04:01.563 CXX test/cpp_headers/cpuset.o 00:04:01.563 CXX test/cpp_headers/crc16.o 00:04:01.563 CXX test/cpp_headers/crc32.o 00:04:01.563 CXX test/cpp_headers/crc64.o 00:04:01.563 CXX test/cpp_headers/dif.o 00:04:01.563 CXX test/cpp_headers/dma.o 00:04:01.563 CXX test/cpp_headers/env_dpdk.o 00:04:01.563 CXX test/cpp_headers/endian.o 00:04:01.563 CXX test/cpp_headers/env.o 00:04:01.563 CXX test/cpp_headers/event.o 00:04:01.563 CXX test/cpp_headers/fd_group.o 00:04:01.563 CXX test/cpp_headers/fd.o 00:04:01.829 CXX test/cpp_headers/file.o 00:04:01.829 CXX test/cpp_headers/fsdev.o 00:04:01.829 CXX test/cpp_headers/fsdev_module.o 00:04:01.829 CXX test/cpp_headers/hexlify.o 00:04:01.829 CXX test/cpp_headers/ftl.o 00:04:01.829 CXX test/cpp_headers/gpt_spec.o 00:04:01.829 CXX test/cpp_headers/histogram_data.o 00:04:01.829 CXX test/cpp_headers/idxd_spec.o 00:04:01.829 CXX test/cpp_headers/idxd.o 00:04:01.829 CXX test/cpp_headers/init.o 00:04:01.829 CXX test/cpp_headers/ioat.o 00:04:01.829 CXX test/cpp_headers/ioat_spec.o 00:04:01.829 CXX test/cpp_headers/iscsi_spec.o 00:04:01.829 CXX test/cpp_headers/jsonrpc.o 00:04:01.829 CXX test/cpp_headers/json.o 00:04:01.829 CXX test/cpp_headers/keyring.o 00:04:01.829 CXX test/cpp_headers/keyring_module.o 00:04:01.829 CXX test/cpp_headers/log.o 00:04:01.829 CXX test/cpp_headers/likely.o 00:04:01.829 CXX test/cpp_headers/lvol.o 00:04:01.829 CXX test/cpp_headers/memory.o 00:04:01.829 CXX test/cpp_headers/md5.o 00:04:01.829 CXX test/cpp_headers/mmio.o 00:04:01.829 CXX test/cpp_headers/nbd.o 00:04:01.829 CXX test/cpp_headers/net.o 00:04:01.829 CXX test/cpp_headers/notify.o 00:04:01.829 CXX test/cpp_headers/nvme.o 00:04:01.829 CXX test/cpp_headers/nvme_intel.o 00:04:01.829 CXX test/cpp_headers/nvme_ocssd.o 00:04:01.829 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:01.829 CXX test/cpp_headers/nvme_spec.o 00:04:01.829 CXX test/cpp_headers/nvme_zns.o 00:04:01.829 CXX test/cpp_headers/nvmf_cmd.o 00:04:01.829 CXX test/cpp_headers/nvmf.o 00:04:01.829 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:01.829 CXX test/cpp_headers/nvmf_spec.o 00:04:01.829 CXX test/cpp_headers/opal.o 00:04:01.829 CXX test/cpp_headers/nvmf_transport.o 00:04:01.829 CXX test/cpp_headers/opal_spec.o 00:04:01.829 CC examples/ioat/perf/perf.o 00:04:01.829 CC examples/ioat/verify/verify.o 00:04:01.829 CXX test/cpp_headers/pci_ids.o 00:04:01.829 CC test/app/jsoncat/jsoncat.o 00:04:01.829 CC test/app/histogram_perf/histogram_perf.o 00:04:01.829 CC 
test/thread/poller_perf/poller_perf.o 00:04:01.829 CC test/env/memory/memory_ut.o 00:04:01.830 CC examples/util/zipf/zipf.o 00:04:01.830 CC test/dma/test_dma/test_dma.o 00:04:01.830 CC app/fio/nvme/fio_plugin.o 00:04:01.830 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:01.830 CC test/env/vtophys/vtophys.o 00:04:01.830 CC test/app/stub/stub.o 00:04:01.830 CC test/app/bdev_svc/bdev_svc.o 00:04:01.830 CC test/env/pci/pci_ut.o 00:04:02.113 CC app/fio/bdev/fio_plugin.o 00:04:02.113 LINK rpc_client_test 00:04:02.113 LINK spdk_lspci 00:04:02.113 LINK nvmf_tgt 00:04:02.113 LINK spdk_tgt 00:04:02.113 LINK iscsi_tgt 00:04:02.378 LINK spdk_trace_record 00:04:02.378 CC test/env/mem_callbacks/mem_callbacks.o 00:04:02.378 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:02.378 LINK spdk_nvme_discover 00:04:02.378 LINK jsoncat 00:04:02.378 CXX test/cpp_headers/pipe.o 00:04:02.378 LINK histogram_perf 00:04:02.378 CXX test/cpp_headers/queue.o 00:04:02.378 LINK interrupt_tgt 00:04:02.378 CXX test/cpp_headers/rpc.o 00:04:02.378 CXX test/cpp_headers/scheduler.o 00:04:02.378 CXX test/cpp_headers/scsi.o 00:04:02.378 CXX test/cpp_headers/reduce.o 00:04:02.378 CXX test/cpp_headers/sock.o 00:04:02.378 CXX test/cpp_headers/scsi_spec.o 00:04:02.378 CXX test/cpp_headers/string.o 00:04:02.378 CXX test/cpp_headers/stdinc.o 00:04:02.378 LINK poller_perf 00:04:02.378 CXX test/cpp_headers/trace.o 00:04:02.378 CXX test/cpp_headers/trace_parser.o 00:04:02.378 CXX test/cpp_headers/thread.o 00:04:02.378 CXX test/cpp_headers/tree.o 00:04:02.378 CXX test/cpp_headers/ublk.o 00:04:02.378 CXX test/cpp_headers/uuid.o 00:04:02.378 CXX test/cpp_headers/util.o 00:04:02.378 CXX test/cpp_headers/version.o 00:04:02.378 CXX test/cpp_headers/vfio_user_pci.o 00:04:02.378 CXX test/cpp_headers/vfio_user_spec.o 00:04:02.378 CXX test/cpp_headers/vhost.o 00:04:02.378 CXX test/cpp_headers/vmd.o 00:04:02.378 CXX test/cpp_headers/xor.o 00:04:02.378 CXX test/cpp_headers/zipf.o 00:04:02.378 LINK spdk_dd 00:04:02.378 LINK ioat_perf 00:04:02.638 LINK vtophys 00:04:02.638 LINK zipf 00:04:02.638 LINK env_dpdk_post_init 00:04:02.638 LINK stub 00:04:02.638 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:02.638 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:02.638 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:02.638 LINK verify 00:04:02.638 LINK bdev_svc 00:04:02.638 LINK spdk_trace 00:04:02.897 LINK pci_ut 00:04:02.897 LINK nvme_fuzz 00:04:02.897 LINK spdk_nvme_identify 00:04:02.897 LINK spdk_nvme_perf 00:04:02.897 LINK test_dma 00:04:03.156 CC test/event/event_perf/event_perf.o 00:04:03.156 LINK spdk_nvme 00:04:03.156 CC test/event/reactor/reactor.o 00:04:03.156 CC test/event/reactor_perf/reactor_perf.o 00:04:03.156 LINK spdk_bdev 00:04:03.156 LINK vhost_fuzz 00:04:03.156 LINK mem_callbacks 00:04:03.156 CC test/event/app_repeat/app_repeat.o 00:04:03.156 CC test/event/scheduler/scheduler.o 00:04:03.156 CC app/vhost/vhost.o 00:04:03.156 CC examples/vmd/led/led.o 00:04:03.156 LINK spdk_top 00:04:03.156 CC examples/vmd/lsvmd/lsvmd.o 00:04:03.156 CC examples/idxd/perf/perf.o 00:04:03.156 CC examples/sock/hello_world/hello_sock.o 00:04:03.156 CC examples/thread/thread/thread_ex.o 00:04:03.156 LINK reactor 00:04:03.156 LINK event_perf 00:04:03.156 LINK reactor_perf 00:04:03.156 LINK app_repeat 00:04:03.415 LINK lsvmd 00:04:03.415 LINK led 00:04:03.415 LINK vhost 00:04:03.415 LINK scheduler 00:04:03.415 LINK hello_sock 00:04:03.415 LINK thread 00:04:03.415 LINK memory_ut 00:04:03.415 LINK idxd_perf 00:04:03.415 CC test/nvme/overhead/overhead.o 
00:04:03.415 CC test/nvme/sgl/sgl.o 00:04:03.415 CC test/nvme/simple_copy/simple_copy.o 00:04:03.415 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:03.415 CC test/nvme/aer/aer.o 00:04:03.415 CC test/nvme/compliance/nvme_compliance.o 00:04:03.415 CC test/nvme/startup/startup.o 00:04:03.415 CC test/nvme/boot_partition/boot_partition.o 00:04:03.415 CC test/nvme/fdp/fdp.o 00:04:03.415 CC test/nvme/e2edp/nvme_dp.o 00:04:03.415 CC test/nvme/connect_stress/connect_stress.o 00:04:03.415 CC test/nvme/reset/reset.o 00:04:03.415 CC test/nvme/fused_ordering/fused_ordering.o 00:04:03.415 CC test/nvme/err_injection/err_injection.o 00:04:03.415 CC test/nvme/cuse/cuse.o 00:04:03.415 CC test/nvme/reserve/reserve.o 00:04:03.674 CC test/accel/dif/dif.o 00:04:03.674 CC test/blobfs/mkfs/mkfs.o 00:04:03.674 CC test/lvol/esnap/esnap.o 00:04:03.674 LINK doorbell_aers 00:04:03.674 LINK startup 00:04:03.674 LINK boot_partition 00:04:03.674 LINK connect_stress 00:04:03.674 LINK err_injection 00:04:03.674 LINK fused_ordering 00:04:03.674 LINK simple_copy 00:04:03.674 LINK reserve 00:04:03.674 LINK sgl 00:04:03.674 LINK mkfs 00:04:03.674 LINK overhead 00:04:03.932 LINK aer 00:04:03.932 LINK nvme_dp 00:04:03.932 LINK reset 00:04:03.932 LINK nvme_compliance 00:04:03.932 LINK fdp 00:04:03.932 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:03.932 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:03.932 CC examples/nvme/reconnect/reconnect.o 00:04:03.932 CC examples/nvme/hello_world/hello_world.o 00:04:03.932 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:03.932 CC examples/nvme/abort/abort.o 00:04:03.932 CC examples/nvme/arbitration/arbitration.o 00:04:03.932 CC examples/nvme/hotplug/hotplug.o 00:04:03.932 CC examples/accel/perf/accel_perf.o 00:04:03.932 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:03.932 CC examples/blob/cli/blobcli.o 00:04:03.932 CC examples/blob/hello_world/hello_blob.o 00:04:04.191 LINK pmr_persistence 00:04:04.191 LINK iscsi_fuzz 00:04:04.191 LINK cmb_copy 00:04:04.191 LINK dif 00:04:04.191 LINK hello_world 00:04:04.191 LINK hotplug 00:04:04.191 LINK arbitration 00:04:04.191 LINK reconnect 00:04:04.191 LINK abort 00:04:04.191 LINK hello_fsdev 00:04:04.191 LINK hello_blob 00:04:04.450 LINK nvme_manage 00:04:04.450 LINK accel_perf 00:04:04.450 LINK blobcli 00:04:04.710 LINK cuse 00:04:04.710 CC test/bdev/bdevio/bdevio.o 00:04:04.969 CC examples/bdev/hello_world/hello_bdev.o 00:04:04.969 CC examples/bdev/bdevperf/bdevperf.o 00:04:04.969 LINK bdevio 00:04:05.228 LINK hello_bdev 00:04:05.488 LINK bdevperf 00:04:06.058 CC examples/nvmf/nvmf/nvmf.o 00:04:06.317 LINK nvmf 00:04:07.255 LINK esnap 00:04:07.515 00:04:07.515 real 0m55.481s 00:04:07.515 user 6m49.554s 00:04:07.515 sys 3m4.194s 00:04:07.515 02:45:22 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:07.515 02:45:22 make -- common/autotest_common.sh@10 -- $ set +x 00:04:07.515 ************************************ 00:04:07.515 END TEST make 00:04:07.515 ************************************ 00:04:07.515 02:45:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:07.515 02:45:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:07.515 02:45:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:07.515 02:45:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.515 02:45:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:07.515 02:45:22 -- pm/common@44 -- $ pid=7055 00:04:07.515 02:45:22 -- pm/common@50 -- $ 
kill -TERM 7055 00:04:07.515 02:45:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.515 02:45:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:07.515 02:45:22 -- pm/common@44 -- $ pid=7056 00:04:07.515 02:45:22 -- pm/common@50 -- $ kill -TERM 7056 00:04:07.515 02:45:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.515 02:45:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:07.515 02:45:22 -- pm/common@44 -- $ pid=7058 00:04:07.515 02:45:22 -- pm/common@50 -- $ kill -TERM 7058 00:04:07.515 02:45:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.515 02:45:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:07.515 02:45:22 -- pm/common@44 -- $ pid=7087 00:04:07.515 02:45:22 -- pm/common@50 -- $ sudo -E kill -TERM 7087 00:04:07.515 02:45:22 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:07.515 02:45:22 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:07.775 02:45:22 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:07.775 02:45:22 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:07.775 02:45:22 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:07.775 02:45:22 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:07.775 02:45:22 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.775 02:45:22 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.775 02:45:22 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.775 02:45:22 -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.775 02:45:22 -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.775 02:45:22 -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.775 02:45:22 -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.775 02:45:22 -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.775 02:45:22 -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.775 02:45:22 -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.775 02:45:22 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.775 02:45:22 -- scripts/common.sh@344 -- # case "$op" in 00:04:07.775 02:45:22 -- scripts/common.sh@345 -- # : 1 00:04:07.775 02:45:22 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.775 02:45:22 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.775 02:45:22 -- scripts/common.sh@365 -- # decimal 1 00:04:07.775 02:45:22 -- scripts/common.sh@353 -- # local d=1 00:04:07.775 02:45:22 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.775 02:45:22 -- scripts/common.sh@355 -- # echo 1 00:04:07.775 02:45:22 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.775 02:45:22 -- scripts/common.sh@366 -- # decimal 2 00:04:07.775 02:45:22 -- scripts/common.sh@353 -- # local d=2 00:04:07.775 02:45:22 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.775 02:45:22 -- scripts/common.sh@355 -- # echo 2 00:04:07.775 02:45:22 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.775 02:45:22 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.775 02:45:22 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.775 02:45:22 -- scripts/common.sh@368 -- # return 0 00:04:07.775 02:45:22 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.775 02:45:22 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:07.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.775 --rc genhtml_branch_coverage=1 00:04:07.775 --rc genhtml_function_coverage=1 00:04:07.775 --rc genhtml_legend=1 00:04:07.775 --rc geninfo_all_blocks=1 00:04:07.775 --rc geninfo_unexecuted_blocks=1 00:04:07.775 00:04:07.775 ' 00:04:07.775 02:45:22 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:07.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.775 --rc genhtml_branch_coverage=1 00:04:07.775 --rc genhtml_function_coverage=1 00:04:07.775 --rc genhtml_legend=1 00:04:07.775 --rc geninfo_all_blocks=1 00:04:07.775 --rc geninfo_unexecuted_blocks=1 00:04:07.775 00:04:07.775 ' 00:04:07.775 02:45:22 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:07.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.775 --rc genhtml_branch_coverage=1 00:04:07.775 --rc genhtml_function_coverage=1 00:04:07.775 --rc genhtml_legend=1 00:04:07.775 --rc geninfo_all_blocks=1 00:04:07.775 --rc geninfo_unexecuted_blocks=1 00:04:07.775 00:04:07.775 ' 00:04:07.775 02:45:22 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:07.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.775 --rc genhtml_branch_coverage=1 00:04:07.775 --rc genhtml_function_coverage=1 00:04:07.775 --rc genhtml_legend=1 00:04:07.775 --rc geninfo_all_blocks=1 00:04:07.775 --rc geninfo_unexecuted_blocks=1 00:04:07.775 00:04:07.775 ' 00:04:07.775 02:45:22 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:07.775 02:45:22 -- nvmf/common.sh@7 -- # uname -s 00:04:07.775 02:45:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.775 02:45:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.775 02:45:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.775 02:45:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.775 02:45:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.775 02:45:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.775 02:45:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.775 02:45:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.775 02:45:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.775 02:45:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.775 02:45:22 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:07.775 02:45:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:07.775 02:45:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.775 02:45:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.775 02:45:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:07.775 02:45:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:07.775 02:45:22 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:07.775 02:45:22 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:07.775 02:45:22 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.775 02:45:22 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.775 02:45:22 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.775 02:45:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.775 02:45:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.775 02:45:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.775 02:45:22 -- paths/export.sh@5 -- # export PATH 00:04:07.775 02:45:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.775 02:45:22 -- nvmf/common.sh@51 -- # : 0 00:04:07.775 02:45:22 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:07.775 02:45:22 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:07.775 02:45:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:07.775 02:45:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.775 02:45:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.775 02:45:22 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:07.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:07.775 02:45:22 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:07.775 02:45:22 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:07.775 02:45:22 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:07.775 02:45:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:07.775 02:45:22 -- spdk/autotest.sh@32 -- # uname -s 00:04:07.775 02:45:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:07.775 02:45:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:07.775 02:45:22 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:04:07.775 02:45:22 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:07.775 02:45:22 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:07.775 02:45:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:07.775 02:45:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:07.775 02:45:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:07.775 02:45:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:07.775 02:45:22 -- spdk/autotest.sh@48 -- # udevadm_pid=88056 00:04:07.775 02:45:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:07.775 02:45:22 -- pm/common@17 -- # local monitor 00:04:07.775 02:45:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.775 02:45:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.775 02:45:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.775 02:45:22 -- pm/common@21 -- # date +%s 00:04:07.775 02:45:22 -- pm/common@21 -- # date +%s 00:04:07.775 02:45:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.775 02:45:22 -- pm/common@25 -- # sleep 1 00:04:07.775 02:45:22 -- pm/common@21 -- # date +%s 00:04:07.775 02:45:22 -- pm/common@21 -- # date +%s 00:04:07.775 02:45:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734140722 00:04:07.775 02:45:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734140722 00:04:07.776 02:45:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734140722 00:04:07.776 02:45:22 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734140722 00:04:08.035 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734140722_collect-cpu-load.pm.log 00:04:08.035 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734140722_collect-vmstat.pm.log 00:04:08.035 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734140722_collect-cpu-temp.pm.log 00:04:08.035 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734140722_collect-bmc-pm.bmc.pm.log 00:04:08.972 02:45:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:08.972 02:45:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:08.972 02:45:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.972 02:45:23 -- common/autotest_common.sh@10 -- # set +x 00:04:08.972 02:45:23 -- spdk/autotest.sh@59 -- # create_test_list 00:04:08.972 02:45:23 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:08.972 02:45:23 -- common/autotest_common.sh@10 -- # set +x 00:04:08.972 02:45:23 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:08.972 02:45:23 -- 
spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.972 02:45:23 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.972 02:45:23 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:08.972 02:45:23 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.972 02:45:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:08.972 02:45:23 -- common/autotest_common.sh@1457 -- # uname 00:04:08.972 02:45:23 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:08.972 02:45:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:08.972 02:45:23 -- common/autotest_common.sh@1477 -- # uname 00:04:08.972 02:45:23 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:08.972 02:45:23 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:08.972 02:45:23 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:08.972 lcov: LCOV version 1.15 00:04:08.972 02:45:24 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:30.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:30.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:34.200 02:45:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:34.200 02:45:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:34.200 02:45:48 -- common/autotest_common.sh@10 -- # set +x 00:04:34.200 02:45:48 -- spdk/autotest.sh@78 -- # rm -f 00:04:34.200 02:45:48 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.740 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:36.740 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:36.740 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:36.740 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:36.740 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:36.740 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:36.740 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:36.740 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:37.000 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:37.000 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:37.000 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:37.000 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:37.000 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:37.000 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:37.000 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:37.000 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:37.000 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:37.000 02:45:52 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:37.000 02:45:52 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:37.000 02:45:52 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:37.000 02:45:52 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:37.000 02:45:52 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:37.000 02:45:52 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:37.000 02:45:52 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:37.000 02:45:52 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:37.000 02:45:52 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:37.000 02:45:52 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:37.000 02:45:52 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:37.000 02:45:52 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.000 02:45:52 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:37.000 02:45:52 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:37.000 02:45:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.000 02:45:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:37.000 02:45:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:37.000 02:45:52 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:37.000 02:45:52 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:37.259 No valid GPT data, bailing 00:04:37.259 02:45:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:37.259 02:45:52 -- scripts/common.sh@394 -- # pt= 00:04:37.259 02:45:52 -- scripts/common.sh@395 -- # return 1 00:04:37.259 02:45:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:37.259 1+0 records in 00:04:37.259 1+0 records out 00:04:37.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536202 s, 196 MB/s 00:04:37.259 02:45:52 -- spdk/autotest.sh@105 -- # sync 00:04:37.259 02:45:52 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:37.259 02:45:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:37.259 02:45:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:42.540 02:45:57 -- spdk/autotest.sh@111 -- # uname -s 00:04:42.540 02:45:57 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:42.540 02:45:57 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:42.540 02:45:57 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:45.834 Hugepages 00:04:45.834 node hugesize free / total 00:04:45.834 node0 1048576kB 0 / 0 00:04:45.834 node0 2048kB 0 / 0 00:04:45.834 node1 1048576kB 0 / 0 00:04:45.834 node1 2048kB 0 / 0 00:04:45.834 00:04:45.834 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.834 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:45.834 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:45.834 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:45.834 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:45.834 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:45.834 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:45.834 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:45.834 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:45.834 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:45.834 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:45.834 I/OAT 0000:80:04.1 8086 2021 1 
ioatdma - - 00:04:45.834 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:45.834 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:45.834 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:45.834 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:45.834 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:45.834 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:45.834 02:46:00 -- spdk/autotest.sh@117 -- # uname -s 00:04:45.834 02:46:00 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:45.834 02:46:00 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:45.834 02:46:00 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:48.371 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:48.371 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:49.310 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:49.310 02:46:04 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:50.252 02:46:05 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:50.252 02:46:05 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:50.252 02:46:05 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:50.252 02:46:05 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:50.252 02:46:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:50.252 02:46:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:50.252 02:46:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:50.252 02:46:05 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:50.252 02:46:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:50.515 02:46:05 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:50.515 02:46:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:50.515 02:46:05 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.056 Waiting for block devices as requested 00:04:53.317 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:53.317 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:53.317 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:53.579 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:53.579 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:53.579 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:53.839 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:53.839 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:53.839 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:53.839 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:54.099 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 
00:04:54.099 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:54.099 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:54.359 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:54.359 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:54.359 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:54.359 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:54.620 02:46:09 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:54.620 02:46:09 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:54.620 02:46:09 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:54.620 02:46:09 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:54.620 02:46:09 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:54.620 02:46:09 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:54.620 02:46:09 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:54.620 02:46:09 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:54.620 02:46:09 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:54.620 02:46:09 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:54.620 02:46:09 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:54.620 02:46:09 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:54.620 02:46:09 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:54.620 02:46:09 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:54.620 02:46:09 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:54.620 02:46:09 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:54.620 02:46:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:54.620 02:46:09 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:54.620 02:46:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:54.620 02:46:09 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:54.620 02:46:09 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:54.620 02:46:09 -- common/autotest_common.sh@1543 -- # continue 00:04:54.620 02:46:09 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:54.620 02:46:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.620 02:46:09 -- common/autotest_common.sh@10 -- # set +x 00:04:54.620 02:46:09 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:54.620 02:46:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.620 02:46:09 -- common/autotest_common.sh@10 -- # set +x 00:04:54.620 02:46:09 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.919 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:80:04.3 
(8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.919 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:58.489 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:58.489 02:46:13 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:58.489 02:46:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:58.489 02:46:13 -- common/autotest_common.sh@10 -- # set +x 00:04:58.750 02:46:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:58.750 02:46:13 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:58.750 02:46:13 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:58.750 02:46:13 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:58.750 02:46:13 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:58.750 02:46:13 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:58.750 02:46:13 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:58.750 02:46:13 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:58.750 02:46:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:58.750 02:46:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:58.750 02:46:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.750 02:46:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.750 02:46:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:58.750 02:46:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:58.750 02:46:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:58.750 02:46:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:58.750 02:46:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:58.750 02:46:13 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:58.750 02:46:13 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:58.750 02:46:13 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:58.750 02:46:13 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:58.750 02:46:13 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:58.750 02:46:13 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:58.750 02:46:13 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=102231 00:04:58.750 02:46:13 -- common/autotest_common.sh@1585 -- # waitforlisten 102231 00:04:58.750 02:46:13 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.750 02:46:13 -- common/autotest_common.sh@835 -- # '[' -z 102231 ']' 00:04:58.750 02:46:13 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.750 02:46:13 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.750 02:46:13 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.750 02:46:13 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.750 02:46:13 -- common/autotest_common.sh@10 -- # set +x 00:04:58.750 [2024-12-14 02:46:13.785979] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:04:58.750 [2024-12-14 02:46:13.786028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102231 ] 00:04:58.750 [2024-12-14 02:46:13.863383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.010 [2024-12-14 02:46:13.886291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.010 02:46:14 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.010 02:46:14 -- common/autotest_common.sh@868 -- # return 0 00:04:59.010 02:46:14 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:59.010 02:46:14 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:59.010 02:46:14 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:02.301 nvme0n1 00:05:02.301 02:46:17 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:02.301 [2024-12-14 02:46:17.260126] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:02.301 [2024-12-14 02:46:17.260152] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:02.301 request: 00:05:02.301 { 00:05:02.301 "nvme_ctrlr_name": "nvme0", 00:05:02.301 "password": "test", 00:05:02.301 "method": "bdev_nvme_opal_revert", 00:05:02.301 "req_id": 1 00:05:02.301 } 00:05:02.301 Got JSON-RPC error response 00:05:02.301 response: 00:05:02.301 { 00:05:02.301 "code": -32603, 00:05:02.301 "message": "Internal error" 00:05:02.301 } 00:05:02.301 02:46:17 -- common/autotest_common.sh@1591 -- # true 00:05:02.301 02:46:17 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:02.301 02:46:17 -- common/autotest_common.sh@1595 -- # killprocess 102231 00:05:02.301 02:46:17 -- common/autotest_common.sh@954 -- # '[' -z 102231 ']' 00:05:02.301 02:46:17 -- common/autotest_common.sh@958 -- # kill -0 102231 00:05:02.301 02:46:17 -- common/autotest_common.sh@959 -- # uname 00:05:02.301 02:46:17 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.301 02:46:17 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102231 00:05:02.301 02:46:17 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.301 02:46:17 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.301 02:46:17 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102231' 00:05:02.301 killing process with pid 102231 00:05:02.301 02:46:17 -- common/autotest_common.sh@973 -- # kill 102231 00:05:02.301 02:46:17 -- common/autotest_common.sh@978 -- # wait 102231 00:05:04.209 02:46:18 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:04.209 02:46:18 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:04.209 02:46:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:04.209 02:46:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:04.209 02:46:18 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:04.209 02:46:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.209 02:46:18 -- common/autotest_common.sh@10 -- # set +x 00:05:04.209 02:46:18 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:04.209 02:46:18 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
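For reference, the opal revert traced above is just two JSON-RPC calls against the running spdk_tgt and can be repeated by hand. This is only a sketch using the exact script path and the 0000:5e:00.0 controller from this run (both are host specific); on this drive the revert is expected to fail with "Internal error" (-32603), matching the response shown above.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # attach the PCIe controller under the same controller name used above
    $rpc bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
    # attempt the OPAL revert with the test password; on this drive it returns
    # JSON-RPC error -32603, exactly as in the log output above
    $rpc bdev_nvme_opal_revert -b nvme0 -p test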
00:05:04.209 02:46:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.209 02:46:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.209 02:46:18 -- common/autotest_common.sh@10 -- # set +x 00:05:04.209 ************************************ 00:05:04.209 START TEST env 00:05:04.209 ************************************ 00:05:04.209 02:46:18 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:04.209 * Looking for test storage... 00:05:04.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:04.209 02:46:19 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.209 02:46:19 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:04.209 02:46:19 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:04.209 02:46:19 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:04.209 02:46:19 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.209 02:46:19 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.209 02:46:19 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.209 02:46:19 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.209 02:46:19 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.209 02:46:19 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.209 02:46:19 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.209 02:46:19 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.209 02:46:19 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.209 02:46:19 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.209 02:46:19 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.209 02:46:19 env -- scripts/common.sh@344 -- # case "$op" in 00:05:04.209 02:46:19 env -- scripts/common.sh@345 -- # : 1 00:05:04.209 02:46:19 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.209 02:46:19 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.209 02:46:19 env -- scripts/common.sh@365 -- # decimal 1 00:05:04.209 02:46:19 env -- scripts/common.sh@353 -- # local d=1 00:05:04.209 02:46:19 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.209 02:46:19 env -- scripts/common.sh@355 -- # echo 1 00:05:04.209 02:46:19 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.210 02:46:19 env -- scripts/common.sh@366 -- # decimal 2 00:05:04.210 02:46:19 env -- scripts/common.sh@353 -- # local d=2 00:05:04.210 02:46:19 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.210 02:46:19 env -- scripts/common.sh@355 -- # echo 2 00:05:04.210 02:46:19 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.210 02:46:19 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.210 02:46:19 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.210 02:46:19 env -- scripts/common.sh@368 -- # return 0 00:05:04.210 02:46:19 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.210 02:46:19 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:04.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.210 --rc genhtml_branch_coverage=1 00:05:04.210 --rc genhtml_function_coverage=1 00:05:04.210 --rc genhtml_legend=1 00:05:04.210 --rc geninfo_all_blocks=1 00:05:04.210 --rc geninfo_unexecuted_blocks=1 00:05:04.210 00:05:04.210 ' 00:05:04.210 02:46:19 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:04.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.210 --rc genhtml_branch_coverage=1 00:05:04.210 --rc genhtml_function_coverage=1 00:05:04.210 --rc genhtml_legend=1 00:05:04.210 --rc geninfo_all_blocks=1 00:05:04.210 --rc geninfo_unexecuted_blocks=1 00:05:04.210 00:05:04.210 ' 00:05:04.210 02:46:19 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:04.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.210 --rc genhtml_branch_coverage=1 00:05:04.210 --rc genhtml_function_coverage=1 00:05:04.210 --rc genhtml_legend=1 00:05:04.210 --rc geninfo_all_blocks=1 00:05:04.210 --rc geninfo_unexecuted_blocks=1 00:05:04.210 00:05:04.210 ' 00:05:04.210 02:46:19 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:04.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.210 --rc genhtml_branch_coverage=1 00:05:04.210 --rc genhtml_function_coverage=1 00:05:04.210 --rc genhtml_legend=1 00:05:04.210 --rc geninfo_all_blocks=1 00:05:04.210 --rc geninfo_unexecuted_blocks=1 00:05:04.210 00:05:04.210 ' 00:05:04.210 02:46:19 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:04.210 02:46:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.210 02:46:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.210 02:46:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.210 ************************************ 00:05:04.210 START TEST env_memory 00:05:04.210 ************************************ 00:05:04.210 02:46:19 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:04.210 00:05:04.210 00:05:04.210 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.210 http://cunit.sourceforge.net/ 00:05:04.210 00:05:04.210 00:05:04.210 Suite: memory 00:05:04.210 Test: alloc and free memory map ...[2024-12-14 02:46:19.199328] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:04.210 passed 00:05:04.210 Test: mem map translation ...[2024-12-14 02:46:19.218305] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:04.210 [2024-12-14 02:46:19.218321] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:04.210 [2024-12-14 02:46:19.218370] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:04.210 [2024-12-14 02:46:19.218377] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:04.210 passed 00:05:04.210 Test: mem map registration ...[2024-12-14 02:46:19.255020] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:04.210 [2024-12-14 02:46:19.255042] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:04.210 passed 00:05:04.210 Test: mem map adjacent registrations ...passed 00:05:04.210 00:05:04.210 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.210 suites 1 1 n/a 0 0 00:05:04.210 tests 4 4 4 0 0 00:05:04.210 asserts 152 152 152 0 n/a 00:05:04.210 00:05:04.210 Elapsed time = 0.123 seconds 00:05:04.210 00:05:04.210 real 0m0.132s 00:05:04.210 user 0m0.122s 00:05:04.210 sys 0m0.010s 00:05:04.210 02:46:19 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.210 02:46:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:04.210 ************************************ 00:05:04.210 END TEST env_memory 00:05:04.210 ************************************ 00:05:04.210 02:46:19 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:04.210 02:46:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.210 02:46:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.210 02:46:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.471 ************************************ 00:05:04.471 START TEST env_vtophys 00:05:04.471 ************************************ 00:05:04.471 02:46:19 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:04.471 EAL: lib.eal log level changed from notice to debug 00:05:04.471 EAL: Detected lcore 0 as core 0 on socket 0 00:05:04.471 EAL: Detected lcore 1 as core 1 on socket 0 00:05:04.471 EAL: Detected lcore 2 as core 2 on socket 0 00:05:04.471 EAL: Detected lcore 3 as core 3 on socket 0 00:05:04.471 EAL: Detected lcore 4 as core 4 on socket 0 00:05:04.471 EAL: Detected lcore 5 as core 5 on socket 0 00:05:04.471 EAL: Detected lcore 6 as core 6 on socket 0 00:05:04.471 EAL: Detected lcore 7 as core 8 on socket 0 00:05:04.471 EAL: Detected lcore 8 as core 9 on socket 0 00:05:04.471 EAL: Detected lcore 9 as core 10 on socket 0 00:05:04.471 EAL: Detected lcore 10 as 
core 11 on socket 0 00:05:04.471 EAL: Detected lcore 11 as core 12 on socket 0 00:05:04.471 EAL: Detected lcore 12 as core 13 on socket 0 00:05:04.471 EAL: Detected lcore 13 as core 16 on socket 0 00:05:04.471 EAL: Detected lcore 14 as core 17 on socket 0 00:05:04.471 EAL: Detected lcore 15 as core 18 on socket 0 00:05:04.471 EAL: Detected lcore 16 as core 19 on socket 0 00:05:04.471 EAL: Detected lcore 17 as core 20 on socket 0 00:05:04.471 EAL: Detected lcore 18 as core 21 on socket 0 00:05:04.471 EAL: Detected lcore 19 as core 25 on socket 0 00:05:04.471 EAL: Detected lcore 20 as core 26 on socket 0 00:05:04.471 EAL: Detected lcore 21 as core 27 on socket 0 00:05:04.471 EAL: Detected lcore 22 as core 28 on socket 0 00:05:04.471 EAL: Detected lcore 23 as core 29 on socket 0 00:05:04.471 EAL: Detected lcore 24 as core 0 on socket 1 00:05:04.471 EAL: Detected lcore 25 as core 1 on socket 1 00:05:04.471 EAL: Detected lcore 26 as core 2 on socket 1 00:05:04.471 EAL: Detected lcore 27 as core 3 on socket 1 00:05:04.471 EAL: Detected lcore 28 as core 4 on socket 1 00:05:04.471 EAL: Detected lcore 29 as core 5 on socket 1 00:05:04.471 EAL: Detected lcore 30 as core 6 on socket 1 00:05:04.471 EAL: Detected lcore 31 as core 8 on socket 1 00:05:04.471 EAL: Detected lcore 32 as core 9 on socket 1 00:05:04.471 EAL: Detected lcore 33 as core 10 on socket 1 00:05:04.471 EAL: Detected lcore 34 as core 11 on socket 1 00:05:04.471 EAL: Detected lcore 35 as core 12 on socket 1 00:05:04.471 EAL: Detected lcore 36 as core 13 on socket 1 00:05:04.471 EAL: Detected lcore 37 as core 16 on socket 1 00:05:04.471 EAL: Detected lcore 38 as core 17 on socket 1 00:05:04.471 EAL: Detected lcore 39 as core 18 on socket 1 00:05:04.471 EAL: Detected lcore 40 as core 19 on socket 1 00:05:04.471 EAL: Detected lcore 41 as core 20 on socket 1 00:05:04.471 EAL: Detected lcore 42 as core 21 on socket 1 00:05:04.471 EAL: Detected lcore 43 as core 25 on socket 1 00:05:04.471 EAL: Detected lcore 44 as core 26 on socket 1 00:05:04.471 EAL: Detected lcore 45 as core 27 on socket 1 00:05:04.471 EAL: Detected lcore 46 as core 28 on socket 1 00:05:04.471 EAL: Detected lcore 47 as core 29 on socket 1 00:05:04.471 EAL: Detected lcore 48 as core 0 on socket 0 00:05:04.471 EAL: Detected lcore 49 as core 1 on socket 0 00:05:04.471 EAL: Detected lcore 50 as core 2 on socket 0 00:05:04.471 EAL: Detected lcore 51 as core 3 on socket 0 00:05:04.471 EAL: Detected lcore 52 as core 4 on socket 0 00:05:04.471 EAL: Detected lcore 53 as core 5 on socket 0 00:05:04.471 EAL: Detected lcore 54 as core 6 on socket 0 00:05:04.471 EAL: Detected lcore 55 as core 8 on socket 0 00:05:04.471 EAL: Detected lcore 56 as core 9 on socket 0 00:05:04.471 EAL: Detected lcore 57 as core 10 on socket 0 00:05:04.471 EAL: Detected lcore 58 as core 11 on socket 0 00:05:04.471 EAL: Detected lcore 59 as core 12 on socket 0 00:05:04.471 EAL: Detected lcore 60 as core 13 on socket 0 00:05:04.471 EAL: Detected lcore 61 as core 16 on socket 0 00:05:04.471 EAL: Detected lcore 62 as core 17 on socket 0 00:05:04.471 EAL: Detected lcore 63 as core 18 on socket 0 00:05:04.471 EAL: Detected lcore 64 as core 19 on socket 0 00:05:04.471 EAL: Detected lcore 65 as core 20 on socket 0 00:05:04.471 EAL: Detected lcore 66 as core 21 on socket 0 00:05:04.471 EAL: Detected lcore 67 as core 25 on socket 0 00:05:04.471 EAL: Detected lcore 68 as core 26 on socket 0 00:05:04.471 EAL: Detected lcore 69 as core 27 on socket 0 00:05:04.471 EAL: Detected lcore 70 as core 28 on socket 0 00:05:04.471 
EAL: Detected lcore 71 as core 29 on socket 0 00:05:04.471 EAL: Detected lcore 72 as core 0 on socket 1 00:05:04.471 EAL: Detected lcore 73 as core 1 on socket 1 00:05:04.471 EAL: Detected lcore 74 as core 2 on socket 1 00:05:04.471 EAL: Detected lcore 75 as core 3 on socket 1 00:05:04.471 EAL: Detected lcore 76 as core 4 on socket 1 00:05:04.471 EAL: Detected lcore 77 as core 5 on socket 1 00:05:04.471 EAL: Detected lcore 78 as core 6 on socket 1 00:05:04.471 EAL: Detected lcore 79 as core 8 on socket 1 00:05:04.471 EAL: Detected lcore 80 as core 9 on socket 1 00:05:04.471 EAL: Detected lcore 81 as core 10 on socket 1 00:05:04.471 EAL: Detected lcore 82 as core 11 on socket 1 00:05:04.471 EAL: Detected lcore 83 as core 12 on socket 1 00:05:04.471 EAL: Detected lcore 84 as core 13 on socket 1 00:05:04.471 EAL: Detected lcore 85 as core 16 on socket 1 00:05:04.471 EAL: Detected lcore 86 as core 17 on socket 1 00:05:04.471 EAL: Detected lcore 87 as core 18 on socket 1 00:05:04.471 EAL: Detected lcore 88 as core 19 on socket 1 00:05:04.471 EAL: Detected lcore 89 as core 20 on socket 1 00:05:04.471 EAL: Detected lcore 90 as core 21 on socket 1 00:05:04.471 EAL: Detected lcore 91 as core 25 on socket 1 00:05:04.471 EAL: Detected lcore 92 as core 26 on socket 1 00:05:04.471 EAL: Detected lcore 93 as core 27 on socket 1 00:05:04.471 EAL: Detected lcore 94 as core 28 on socket 1 00:05:04.471 EAL: Detected lcore 95 as core 29 on socket 1 00:05:04.471 EAL: Maximum logical cores by configuration: 128 00:05:04.471 EAL: Detected CPU lcores: 96 00:05:04.471 EAL: Detected NUMA nodes: 2 00:05:04.471 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:04.471 EAL: Detected shared linkage of DPDK 00:05:04.471 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:04.472 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:04.472 EAL: Registered [vdev] bus. 00:05:04.472 EAL: bus.vdev log level changed from disabled to notice 00:05:04.472 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:04.472 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:04.472 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:04.472 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:04.472 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:04.472 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:04.472 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:04.472 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:04.472 EAL: No shared files mode enabled, IPC will be disabled 00:05:04.472 EAL: No shared files mode enabled, IPC is disabled 00:05:04.472 EAL: Bus pci wants IOVA as 'DC' 00:05:04.472 EAL: Bus vdev wants IOVA as 'DC' 00:05:04.472 EAL: Buses did not request a specific IOVA mode. 00:05:04.472 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:04.472 EAL: Selected IOVA mode 'VA' 00:05:04.472 EAL: Probing VFIO support... 
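A note on the EAL output above: IOVA mode 'VA' is selected because an IOMMU is available and VFIO is about to be probed. A quick, SPDK-independent way to confirm the same preconditions on a comparable host (a hedged sketch, not part of the test flow):

    # a non-empty iommu_groups directory means the kernel has an active IOMMU
    ls /sys/kernel/iommu_groups | wc -l
    # the vfio-pci driver backs the 'vfio-pci ->' rebinds performed earlier by setup.sh
    lsmod | grep -w vfio_pci || echo 'vfio_pci not loaded as a module (may be built in)'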
00:05:04.472 EAL: IOMMU type 1 (Type 1) is supported 00:05:04.472 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:04.472 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:04.472 EAL: VFIO support initialized 00:05:04.472 EAL: Ask a virtual area of 0x2e000 bytes 00:05:04.472 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:04.472 EAL: Setting up physically contiguous memory... 00:05:04.472 EAL: Setting maximum number of open files to 524288 00:05:04.472 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:04.472 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:04.472 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:04.472 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.472 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:04.472 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.472 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.472 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:04.472 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:04.472 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.472 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:04.472 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.472 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.472 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:04.472 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:04.472 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.472 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:04.472 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.472 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.472 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:04.472 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:04.472 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.472 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:04.472 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.472 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.472 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:04.472 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:04.472 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:04.472 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.472 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:04.472 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.472 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.472 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:04.472 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:04.472 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.472 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:04.472 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.472 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.472 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:04.472 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:04.472 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.472 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:04.472 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.472 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.472 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:04.472 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:04.472 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.472 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:04.472 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.472 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.472 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:04.472 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:04.472 EAL: Hugepages will be freed exactly as allocated. 00:05:04.472 EAL: No shared files mode enabled, IPC is disabled 00:05:04.472 EAL: No shared files mode enabled, IPC is disabled 00:05:04.472 EAL: TSC frequency is ~2100000 KHz 00:05:04.472 EAL: Main lcore 0 is ready (tid=7f5ab3b98a00;cpuset=[0]) 00:05:04.472 EAL: Trying to obtain current memory policy. 00:05:04.472 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.472 EAL: Restoring previous memory policy: 0 00:05:04.472 EAL: request: mp_malloc_sync 00:05:04.472 EAL: No shared files mode enabled, IPC is disabled 00:05:04.472 EAL: Heap on socket 0 was expanded by 2MB 00:05:04.472 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:05:04.472 EAL: probe driver: 8086:37d2 net_i40e 00:05:04.472 EAL: Not managed by a supported kernel driver, skipped 00:05:04.472 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:05:04.472 EAL: probe driver: 8086:37d2 net_i40e 00:05:04.472 EAL: Not managed by a supported kernel driver, skipped 00:05:04.472 EAL: No shared files mode enabled, IPC is disabled 00:05:04.472 EAL: No shared files mode enabled, IPC is disabled 00:05:04.472 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:04.472 EAL: Mem event callback 'spdk:(nil)' registered 00:05:04.472 00:05:04.472 00:05:04.472 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.472 http://cunit.sourceforge.net/ 00:05:04.472 00:05:04.472 00:05:04.472 Suite: components_suite 00:05:04.472 Test: vtophys_malloc_test ...passed 00:05:04.472 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:04.472 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.472 EAL: Restoring previous memory policy: 4 00:05:04.472 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.472 EAL: request: mp_malloc_sync 00:05:04.472 EAL: No shared files mode enabled, IPC is disabled 00:05:04.472 EAL: Heap on socket 0 was expanded by 4MB 00:05:04.472 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.472 EAL: request: mp_malloc_sync 00:05:04.472 EAL: No shared files mode enabled, IPC is disabled 00:05:04.472 EAL: Heap on socket 0 was shrunk by 4MB 00:05:04.472 EAL: Trying to obtain current memory policy. 00:05:04.472 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.472 EAL: Restoring previous memory policy: 4 00:05:04.472 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.472 EAL: request: mp_malloc_sync 00:05:04.472 EAL: No shared files mode enabled, IPC is disabled 00:05:04.472 EAL: Heap on socket 0 was expanded by 6MB 00:05:04.472 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.472 EAL: request: mp_malloc_sync 00:05:04.472 EAL: No shared files mode enabled, IPC is disabled 00:05:04.472 EAL: Heap on socket 0 was shrunk by 6MB 00:05:04.472 EAL: Trying to obtain current memory policy. 
00:05:04.472 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.472 EAL: Restoring previous memory policy: 4 00:05:04.472 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.472 EAL: request: mp_malloc_sync 00:05:04.472 EAL: No shared files mode enabled, IPC is disabled 00:05:04.472 EAL: Heap on socket 0 was expanded by 10MB 00:05:04.472 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.472 EAL: request: mp_malloc_sync 00:05:04.472 EAL: No shared files mode enabled, IPC is disabled 00:05:04.472 EAL: Heap on socket 0 was shrunk by 10MB 00:05:04.472 EAL: Trying to obtain current memory policy. 00:05:04.472 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.472 EAL: Restoring previous memory policy: 4 00:05:04.472 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.472 EAL: request: mp_malloc_sync 00:05:04.472 EAL: No shared files mode enabled, IPC is disabled 00:05:04.472 EAL: Heap on socket 0 was expanded by 18MB 00:05:04.472 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.472 EAL: request: mp_malloc_sync 00:05:04.472 EAL: No shared files mode enabled, IPC is disabled 00:05:04.472 EAL: Heap on socket 0 was shrunk by 18MB 00:05:04.472 EAL: Trying to obtain current memory policy. 00:05:04.472 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.472 EAL: Restoring previous memory policy: 4 00:05:04.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.473 EAL: request: mp_malloc_sync 00:05:04.473 EAL: No shared files mode enabled, IPC is disabled 00:05:04.473 EAL: Heap on socket 0 was expanded by 34MB 00:05:04.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.473 EAL: request: mp_malloc_sync 00:05:04.473 EAL: No shared files mode enabled, IPC is disabled 00:05:04.473 EAL: Heap on socket 0 was shrunk by 34MB 00:05:04.473 EAL: Trying to obtain current memory policy. 00:05:04.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.473 EAL: Restoring previous memory policy: 4 00:05:04.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.473 EAL: request: mp_malloc_sync 00:05:04.473 EAL: No shared files mode enabled, IPC is disabled 00:05:04.473 EAL: Heap on socket 0 was expanded by 66MB 00:05:04.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.473 EAL: request: mp_malloc_sync 00:05:04.473 EAL: No shared files mode enabled, IPC is disabled 00:05:04.473 EAL: Heap on socket 0 was shrunk by 66MB 00:05:04.473 EAL: Trying to obtain current memory policy. 00:05:04.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.473 EAL: Restoring previous memory policy: 4 00:05:04.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.473 EAL: request: mp_malloc_sync 00:05:04.473 EAL: No shared files mode enabled, IPC is disabled 00:05:04.473 EAL: Heap on socket 0 was expanded by 130MB 00:05:04.473 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.473 EAL: request: mp_malloc_sync 00:05:04.473 EAL: No shared files mode enabled, IPC is disabled 00:05:04.473 EAL: Heap on socket 0 was shrunk by 130MB 00:05:04.473 EAL: Trying to obtain current memory policy. 
00:05:04.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.733 EAL: Restoring previous memory policy: 4 00:05:04.733 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.733 EAL: request: mp_malloc_sync 00:05:04.733 EAL: No shared files mode enabled, IPC is disabled 00:05:04.733 EAL: Heap on socket 0 was expanded by 258MB 00:05:04.733 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.733 EAL: request: mp_malloc_sync 00:05:04.733 EAL: No shared files mode enabled, IPC is disabled 00:05:04.733 EAL: Heap on socket 0 was shrunk by 258MB 00:05:04.733 EAL: Trying to obtain current memory policy. 00:05:04.733 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.733 EAL: Restoring previous memory policy: 4 00:05:04.733 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.733 EAL: request: mp_malloc_sync 00:05:04.733 EAL: No shared files mode enabled, IPC is disabled 00:05:04.733 EAL: Heap on socket 0 was expanded by 514MB 00:05:04.993 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.993 EAL: request: mp_malloc_sync 00:05:04.993 EAL: No shared files mode enabled, IPC is disabled 00:05:04.993 EAL: Heap on socket 0 was shrunk by 514MB 00:05:04.993 EAL: Trying to obtain current memory policy. 00:05:04.993 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.253 EAL: Restoring previous memory policy: 4 00:05:05.253 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.253 EAL: request: mp_malloc_sync 00:05:05.253 EAL: No shared files mode enabled, IPC is disabled 00:05:05.253 EAL: Heap on socket 0 was expanded by 1026MB 00:05:05.253 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.513 EAL: request: mp_malloc_sync 00:05:05.513 EAL: No shared files mode enabled, IPC is disabled 00:05:05.513 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:05.513 passed 00:05:05.513 00:05:05.513 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.513 suites 1 1 n/a 0 0 00:05:05.513 tests 2 2 2 0 0 00:05:05.513 asserts 497 497 497 0 n/a 00:05:05.513 00:05:05.513 Elapsed time = 0.971 seconds 00:05:05.513 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.513 EAL: request: mp_malloc_sync 00:05:05.513 EAL: No shared files mode enabled, IPC is disabled 00:05:05.513 EAL: Heap on socket 0 was shrunk by 2MB 00:05:05.513 EAL: No shared files mode enabled, IPC is disabled 00:05:05.513 EAL: No shared files mode enabled, IPC is disabled 00:05:05.513 EAL: No shared files mode enabled, IPC is disabled 00:05:05.513 00:05:05.513 real 0m1.104s 00:05:05.513 user 0m0.644s 00:05:05.513 sys 0m0.431s 00:05:05.513 02:46:20 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.513 02:46:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:05.513 ************************************ 00:05:05.513 END TEST env_vtophys 00:05:05.513 ************************************ 00:05:05.513 02:46:20 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:05.513 02:46:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.513 02:46:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.513 02:46:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.513 ************************************ 00:05:05.513 START TEST env_pci 00:05:05.513 ************************************ 00:05:05.513 02:46:20 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:05.513 00:05:05.513 00:05:05.513 CUnit - A unit testing 
framework for C - Version 2.1-3 00:05:05.513 http://cunit.sourceforge.net/ 00:05:05.513 00:05:05.513 00:05:05.513 Suite: pci 00:05:05.513 Test: pci_hook ...[2024-12-14 02:46:20.560516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 103482 has claimed it 00:05:05.513 EAL: Cannot find device (10000:00:01.0) 00:05:05.513 EAL: Failed to attach device on primary process 00:05:05.513 passed 00:05:05.513 00:05:05.513 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.513 suites 1 1 n/a 0 0 00:05:05.513 tests 1 1 1 0 0 00:05:05.513 asserts 25 25 25 0 n/a 00:05:05.513 00:05:05.513 Elapsed time = 0.028 seconds 00:05:05.513 00:05:05.513 real 0m0.046s 00:05:05.513 user 0m0.014s 00:05:05.513 sys 0m0.032s 00:05:05.513 02:46:20 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.513 02:46:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:05.513 ************************************ 00:05:05.513 END TEST env_pci 00:05:05.513 ************************************ 00:05:05.513 02:46:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:05.513 02:46:20 env -- env/env.sh@15 -- # uname 00:05:05.513 02:46:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:05.513 02:46:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:05.513 02:46:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:05.513 02:46:20 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:05.513 02:46:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.513 02:46:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.774 ************************************ 00:05:05.774 START TEST env_dpdk_post_init 00:05:05.774 ************************************ 00:05:05.774 02:46:20 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:05.774 EAL: Detected CPU lcores: 96 00:05:05.774 EAL: Detected NUMA nodes: 2 00:05:05.774 EAL: Detected shared linkage of DPDK 00:05:05.774 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:05.774 EAL: Selected IOVA mode 'VA' 00:05:05.774 EAL: VFIO support initialized 00:05:05.774 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:05.774 EAL: Using IOMMU type 1 (Type 1) 00:05:05.774 EAL: Ignore mapping IO port bar(1) 00:05:05.774 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:05.774 EAL: Ignore mapping IO port bar(1) 00:05:05.774 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:05.774 EAL: Ignore mapping IO port bar(1) 00:05:05.774 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:05.774 EAL: Ignore mapping IO port bar(1) 00:05:05.774 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:05.774 EAL: Ignore mapping IO port bar(1) 00:05:05.774 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:05.774 EAL: Ignore mapping IO port bar(1) 00:05:05.774 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:05.774 EAL: Ignore mapping IO port bar(1) 00:05:05.774 EAL: Probe PCI driver: spdk_ioat 
(8086:2021) device: 0000:00:04.6 (socket 0) 00:05:05.774 EAL: Ignore mapping IO port bar(1) 00:05:05.774 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:06.714 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:06.714 EAL: Ignore mapping IO port bar(1) 00:05:06.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:06.714 EAL: Ignore mapping IO port bar(1) 00:05:06.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:06.714 EAL: Ignore mapping IO port bar(1) 00:05:06.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:06.714 EAL: Ignore mapping IO port bar(1) 00:05:06.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:06.714 EAL: Ignore mapping IO port bar(1) 00:05:06.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:06.714 EAL: Ignore mapping IO port bar(1) 00:05:06.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:06.714 EAL: Ignore mapping IO port bar(1) 00:05:06.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:06.714 EAL: Ignore mapping IO port bar(1) 00:05:06.714 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:10.008 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:10.008 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:10.008 Starting DPDK initialization... 00:05:10.008 Starting SPDK post initialization... 00:05:10.008 SPDK NVMe probe 00:05:10.008 Attaching to 0000:5e:00.0 00:05:10.008 Attached to 0000:5e:00.0 00:05:10.008 Cleaning up... 00:05:10.008 00:05:10.008 real 0m4.323s 00:05:10.008 user 0m3.247s 00:05:10.008 sys 0m0.147s 00:05:10.008 02:46:24 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.008 02:46:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.008 ************************************ 00:05:10.008 END TEST env_dpdk_post_init 00:05:10.008 ************************************ 00:05:10.008 02:46:25 env -- env/env.sh@26 -- # uname 00:05:10.008 02:46:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:10.008 02:46:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.008 02:46:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.008 02:46:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.008 02:46:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.008 ************************************ 00:05:10.008 START TEST env_mem_callbacks 00:05:10.008 ************************************ 00:05:10.008 02:46:25 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.008 EAL: Detected CPU lcores: 96 00:05:10.008 EAL: Detected NUMA nodes: 2 00:05:10.008 EAL: Detected shared linkage of DPDK 00:05:10.008 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.008 EAL: Selected IOVA mode 'VA' 00:05:10.008 EAL: VFIO support initialized 00:05:10.008 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.008 00:05:10.008 00:05:10.008 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.008 http://cunit.sourceforge.net/ 00:05:10.008 00:05:10.008 00:05:10.008 Suite: memory 
00:05:10.008 Test: test ... 00:05:10.008 register 0x200000200000 2097152 00:05:10.008 malloc 3145728 00:05:10.008 register 0x200000400000 4194304 00:05:10.008 buf 0x200000500000 len 3145728 PASSED 00:05:10.008 malloc 64 00:05:10.008 buf 0x2000004fff40 len 64 PASSED 00:05:10.008 malloc 4194304 00:05:10.008 register 0x200000800000 6291456 00:05:10.008 buf 0x200000a00000 len 4194304 PASSED 00:05:10.008 free 0x200000500000 3145728 00:05:10.008 free 0x2000004fff40 64 00:05:10.008 unregister 0x200000400000 4194304 PASSED 00:05:10.008 free 0x200000a00000 4194304 00:05:10.008 unregister 0x200000800000 6291456 PASSED 00:05:10.008 malloc 8388608 00:05:10.008 register 0x200000400000 10485760 00:05:10.008 buf 0x200000600000 len 8388608 PASSED 00:05:10.008 free 0x200000600000 8388608 00:05:10.008 unregister 0x200000400000 10485760 PASSED 00:05:10.008 passed 00:05:10.008 00:05:10.008 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.008 suites 1 1 n/a 0 0 00:05:10.008 tests 1 1 1 0 0 00:05:10.008 asserts 15 15 15 0 n/a 00:05:10.008 00:05:10.008 Elapsed time = 0.008 seconds 00:05:10.008 00:05:10.009 real 0m0.055s 00:05:10.009 user 0m0.015s 00:05:10.009 sys 0m0.040s 00:05:10.009 02:46:25 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.009 02:46:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:10.009 ************************************ 00:05:10.009 END TEST env_mem_callbacks 00:05:10.009 ************************************ 00:05:10.268 00:05:10.268 real 0m6.202s 00:05:10.268 user 0m4.285s 00:05:10.268 sys 0m0.994s 00:05:10.268 02:46:25 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.268 02:46:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.268 ************************************ 00:05:10.268 END TEST env 00:05:10.268 ************************************ 00:05:10.268 02:46:25 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:10.268 02:46:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.268 02:46:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.268 02:46:25 -- common/autotest_common.sh@10 -- # set +x 00:05:10.268 ************************************ 00:05:10.268 START TEST rpc 00:05:10.268 ************************************ 00:05:10.268 02:46:25 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:10.268 * Looking for test storage... 
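The env sub-tests exercised by test/env/env.sh above are standalone binaries, so they can be rerun individually from this workspace once hugepages are configured (setup.sh did that earlier in this log). A minimal sketch with the same checkout path; only env_dpdk_post_init takes the core mask and base-virtaddr arguments seen above.

    d=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
    $d/memory/memory_ut                     # mem map alloc/translation/registration suite
    $d/vtophys/vtophys                      # malloc heap expand/shrink suite
    $d/pci/pci_ut                           # pci_hook test
    $d/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
    $d/mem_callbacks/mem_callbacks          # register/unregister mem event callbacks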
00:05:10.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:10.268 02:46:25 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.268 02:46:25 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.268 02:46:25 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.268 02:46:25 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.268 02:46:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.268 02:46:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.268 02:46:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.268 02:46:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.268 02:46:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.268 02:46:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.268 02:46:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.268 02:46:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.268 02:46:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.268 02:46:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.268 02:46:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.268 02:46:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:10.268 02:46:25 rpc -- scripts/common.sh@345 -- # : 1 00:05:10.268 02:46:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.268 02:46:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.268 02:46:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:10.529 02:46:25 rpc -- scripts/common.sh@353 -- # local d=1 00:05:10.529 02:46:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.529 02:46:25 rpc -- scripts/common.sh@355 -- # echo 1 00:05:10.529 02:46:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.529 02:46:25 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:10.529 02:46:25 rpc -- scripts/common.sh@353 -- # local d=2 00:05:10.529 02:46:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.529 02:46:25 rpc -- scripts/common.sh@355 -- # echo 2 00:05:10.529 02:46:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.529 02:46:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.529 02:46:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.529 02:46:25 rpc -- scripts/common.sh@368 -- # return 0 00:05:10.529 02:46:25 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.529 02:46:25 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.529 --rc genhtml_branch_coverage=1 00:05:10.529 --rc genhtml_function_coverage=1 00:05:10.529 --rc genhtml_legend=1 00:05:10.529 --rc geninfo_all_blocks=1 00:05:10.529 --rc geninfo_unexecuted_blocks=1 00:05:10.529 00:05:10.529 ' 00:05:10.529 02:46:25 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.529 --rc genhtml_branch_coverage=1 00:05:10.529 --rc genhtml_function_coverage=1 00:05:10.529 --rc genhtml_legend=1 00:05:10.529 --rc geninfo_all_blocks=1 00:05:10.529 --rc geninfo_unexecuted_blocks=1 00:05:10.529 00:05:10.529 ' 00:05:10.529 02:46:25 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.529 --rc genhtml_branch_coverage=1 00:05:10.529 --rc genhtml_function_coverage=1 
00:05:10.529 --rc genhtml_legend=1 00:05:10.529 --rc geninfo_all_blocks=1 00:05:10.529 --rc geninfo_unexecuted_blocks=1 00:05:10.529 00:05:10.529 ' 00:05:10.529 02:46:25 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.529 --rc genhtml_branch_coverage=1 00:05:10.529 --rc genhtml_function_coverage=1 00:05:10.529 --rc genhtml_legend=1 00:05:10.529 --rc geninfo_all_blocks=1 00:05:10.529 --rc geninfo_unexecuted_blocks=1 00:05:10.529 00:05:10.529 ' 00:05:10.529 02:46:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=104315 00:05:10.529 02:46:25 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:10.529 02:46:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.529 02:46:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 104315 00:05:10.529 02:46:25 rpc -- common/autotest_common.sh@835 -- # '[' -z 104315 ']' 00:05:10.529 02:46:25 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.529 02:46:25 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.529 02:46:25 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.529 02:46:25 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.529 02:46:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.529 [2024-12-14 02:46:25.464807] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:10.529 [2024-12-14 02:46:25.464850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104315 ] 00:05:10.529 [2024-12-14 02:46:25.536666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.529 [2024-12-14 02:46:25.558755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:10.529 [2024-12-14 02:46:25.558788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 104315' to capture a snapshot of events at runtime. 00:05:10.529 [2024-12-14 02:46:25.558795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:10.529 [2024-12-14 02:46:25.558802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:10.529 [2024-12-14 02:46:25.558806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid104315 for offline analysis/debug. 
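The two app_setup_trace notices above describe how to pull a tracepoint snapshot out of this spdk_tgt instance (started with -e bdev, so the bdev tracepoint group is the one enabled). A minimal sketch of both options, reusing the PID from this run — 104315 — which you would substitute with the spdk_tgt PID from your own log:

  # live snapshot from the running target's shared-memory trace buffer
  build/bin/spdk_trace -s spdk_tgt -p 104315

  # or keep the shm file named in the notice for offline analysis later
  cp /dev/shm/spdk_tgt_trace.pid104315 /tmp/spdk_tgt_trace.bin

The rpc_trace_cmd_test further below inspects the same state over JSON-RPC with trace_get_info and checks that the bdev group's tpoint_mask is non-zero.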
00:05:10.529 [2024-12-14 02:46:25.559251] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.789 02:46:25 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.789 02:46:25 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:10.789 02:46:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:10.789 02:46:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:10.789 02:46:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:10.789 02:46:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:10.789 02:46:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.789 02:46:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.789 02:46:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.789 ************************************ 00:05:10.789 START TEST rpc_integrity 00:05:10.789 ************************************ 00:05:10.789 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:10.789 02:46:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.789 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.789 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.789 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.789 02:46:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.789 02:46:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:10.789 02:46:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.789 02:46:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.789 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.789 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.789 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.789 02:46:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:10.789 02:46:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:10.789 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.789 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.789 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.789 02:46:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.789 { 00:05:10.789 "name": "Malloc0", 00:05:10.789 "aliases": [ 00:05:10.790 "2fac5f03-a6d4-420d-84db-77e2d511d51d" 00:05:10.790 ], 00:05:10.790 "product_name": "Malloc disk", 00:05:10.790 "block_size": 512, 00:05:10.790 "num_blocks": 16384, 00:05:10.790 "uuid": "2fac5f03-a6d4-420d-84db-77e2d511d51d", 00:05:10.790 "assigned_rate_limits": { 00:05:10.790 "rw_ios_per_sec": 0, 00:05:10.790 "rw_mbytes_per_sec": 0, 00:05:10.790 "r_mbytes_per_sec": 0, 00:05:10.790 "w_mbytes_per_sec": 0 00:05:10.790 }, 
00:05:10.790 "claimed": false, 00:05:10.790 "zoned": false, 00:05:10.790 "supported_io_types": { 00:05:10.790 "read": true, 00:05:10.790 "write": true, 00:05:10.790 "unmap": true, 00:05:10.790 "flush": true, 00:05:10.790 "reset": true, 00:05:10.790 "nvme_admin": false, 00:05:10.790 "nvme_io": false, 00:05:10.790 "nvme_io_md": false, 00:05:10.790 "write_zeroes": true, 00:05:10.790 "zcopy": true, 00:05:10.790 "get_zone_info": false, 00:05:10.790 "zone_management": false, 00:05:10.790 "zone_append": false, 00:05:10.790 "compare": false, 00:05:10.790 "compare_and_write": false, 00:05:10.790 "abort": true, 00:05:10.790 "seek_hole": false, 00:05:10.790 "seek_data": false, 00:05:10.790 "copy": true, 00:05:10.790 "nvme_iov_md": false 00:05:10.790 }, 00:05:10.790 "memory_domains": [ 00:05:10.790 { 00:05:10.790 "dma_device_id": "system", 00:05:10.790 "dma_device_type": 1 00:05:10.790 }, 00:05:10.790 { 00:05:10.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.790 "dma_device_type": 2 00:05:10.790 } 00:05:10.790 ], 00:05:10.790 "driver_specific": {} 00:05:10.790 } 00:05:10.790 ]' 00:05:10.790 02:46:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:11.050 02:46:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:11.050 02:46:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:11.050 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.050 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.050 [2024-12-14 02:46:25.935517] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:11.050 [2024-12-14 02:46:25.935545] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.050 [2024-12-14 02:46:25.935556] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10ebae0 00:05:11.050 [2024-12-14 02:46:25.935562] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:11.050 [2024-12-14 02:46:25.936613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.050 [2024-12-14 02:46:25.936634] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:11.050 Passthru0 00:05:11.050 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.050 02:46:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:11.050 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.050 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.050 02:46:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.050 02:46:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:11.050 { 00:05:11.050 "name": "Malloc0", 00:05:11.050 "aliases": [ 00:05:11.050 "2fac5f03-a6d4-420d-84db-77e2d511d51d" 00:05:11.050 ], 00:05:11.050 "product_name": "Malloc disk", 00:05:11.050 "block_size": 512, 00:05:11.050 "num_blocks": 16384, 00:05:11.050 "uuid": "2fac5f03-a6d4-420d-84db-77e2d511d51d", 00:05:11.050 "assigned_rate_limits": { 00:05:11.050 "rw_ios_per_sec": 0, 00:05:11.050 "rw_mbytes_per_sec": 0, 00:05:11.050 "r_mbytes_per_sec": 0, 00:05:11.050 "w_mbytes_per_sec": 0 00:05:11.050 }, 00:05:11.050 "claimed": true, 00:05:11.050 "claim_type": "exclusive_write", 00:05:11.050 "zoned": false, 00:05:11.050 "supported_io_types": { 00:05:11.050 "read": true, 00:05:11.050 "write": true, 00:05:11.050 "unmap": true, 00:05:11.050 "flush": 
true, 00:05:11.050 "reset": true, 00:05:11.050 "nvme_admin": false, 00:05:11.050 "nvme_io": false, 00:05:11.050 "nvme_io_md": false, 00:05:11.050 "write_zeroes": true, 00:05:11.050 "zcopy": true, 00:05:11.050 "get_zone_info": false, 00:05:11.050 "zone_management": false, 00:05:11.050 "zone_append": false, 00:05:11.050 "compare": false, 00:05:11.050 "compare_and_write": false, 00:05:11.050 "abort": true, 00:05:11.050 "seek_hole": false, 00:05:11.050 "seek_data": false, 00:05:11.050 "copy": true, 00:05:11.050 "nvme_iov_md": false 00:05:11.050 }, 00:05:11.050 "memory_domains": [ 00:05:11.050 { 00:05:11.050 "dma_device_id": "system", 00:05:11.050 "dma_device_type": 1 00:05:11.050 }, 00:05:11.050 { 00:05:11.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.050 "dma_device_type": 2 00:05:11.050 } 00:05:11.050 ], 00:05:11.050 "driver_specific": {} 00:05:11.050 }, 00:05:11.050 { 00:05:11.050 "name": "Passthru0", 00:05:11.050 "aliases": [ 00:05:11.050 "99f45536-6465-50a7-83e2-049abe46197d" 00:05:11.050 ], 00:05:11.050 "product_name": "passthru", 00:05:11.050 "block_size": 512, 00:05:11.050 "num_blocks": 16384, 00:05:11.050 "uuid": "99f45536-6465-50a7-83e2-049abe46197d", 00:05:11.050 "assigned_rate_limits": { 00:05:11.050 "rw_ios_per_sec": 0, 00:05:11.050 "rw_mbytes_per_sec": 0, 00:05:11.050 "r_mbytes_per_sec": 0, 00:05:11.050 "w_mbytes_per_sec": 0 00:05:11.050 }, 00:05:11.050 "claimed": false, 00:05:11.050 "zoned": false, 00:05:11.050 "supported_io_types": { 00:05:11.050 "read": true, 00:05:11.050 "write": true, 00:05:11.050 "unmap": true, 00:05:11.050 "flush": true, 00:05:11.050 "reset": true, 00:05:11.050 "nvme_admin": false, 00:05:11.050 "nvme_io": false, 00:05:11.050 "nvme_io_md": false, 00:05:11.050 "write_zeroes": true, 00:05:11.050 "zcopy": true, 00:05:11.050 "get_zone_info": false, 00:05:11.050 "zone_management": false, 00:05:11.050 "zone_append": false, 00:05:11.050 "compare": false, 00:05:11.050 "compare_and_write": false, 00:05:11.050 "abort": true, 00:05:11.050 "seek_hole": false, 00:05:11.050 "seek_data": false, 00:05:11.050 "copy": true, 00:05:11.050 "nvme_iov_md": false 00:05:11.050 }, 00:05:11.050 "memory_domains": [ 00:05:11.050 { 00:05:11.050 "dma_device_id": "system", 00:05:11.050 "dma_device_type": 1 00:05:11.050 }, 00:05:11.050 { 00:05:11.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.050 "dma_device_type": 2 00:05:11.050 } 00:05:11.050 ], 00:05:11.050 "driver_specific": { 00:05:11.050 "passthru": { 00:05:11.050 "name": "Passthru0", 00:05:11.050 "base_bdev_name": "Malloc0" 00:05:11.050 } 00:05:11.050 } 00:05:11.050 } 00:05:11.050 ]' 00:05:11.050 02:46:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:11.050 02:46:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:11.050 02:46:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:11.050 02:46:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.050 02:46:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.050 02:46:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.050 02:46:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:11.050 02:46:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.050 02:46:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.050 02:46:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.050 02:46:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:11.050 02:46:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.050 02:46:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.050 02:46:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.051 02:46:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:11.051 02:46:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:11.051 02:46:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:11.051 00:05:11.051 real 0m0.279s 00:05:11.051 user 0m0.169s 00:05:11.051 sys 0m0.043s 00:05:11.051 02:46:26 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.051 02:46:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.051 ************************************ 00:05:11.051 END TEST rpc_integrity 00:05:11.051 ************************************ 00:05:11.051 02:46:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:11.051 02:46:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.051 02:46:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.051 02:46:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.051 ************************************ 00:05:11.051 START TEST rpc_plugins 00:05:11.051 ************************************ 00:05:11.051 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:11.051 02:46:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:11.051 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.051 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.051 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.051 02:46:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:11.051 02:46:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:11.051 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.051 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.311 02:46:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:11.311 { 00:05:11.311 "name": "Malloc1", 00:05:11.311 "aliases": [ 00:05:11.311 "c387cf51-bcf4-4323-a77b-eb2308342d7e" 00:05:11.311 ], 00:05:11.311 "product_name": "Malloc disk", 00:05:11.311 "block_size": 4096, 00:05:11.311 "num_blocks": 256, 00:05:11.311 "uuid": "c387cf51-bcf4-4323-a77b-eb2308342d7e", 00:05:11.311 "assigned_rate_limits": { 00:05:11.311 "rw_ios_per_sec": 0, 00:05:11.311 "rw_mbytes_per_sec": 0, 00:05:11.311 "r_mbytes_per_sec": 0, 00:05:11.311 "w_mbytes_per_sec": 0 00:05:11.311 }, 00:05:11.311 "claimed": false, 00:05:11.311 "zoned": false, 00:05:11.311 "supported_io_types": { 00:05:11.311 "read": true, 00:05:11.311 "write": true, 00:05:11.311 "unmap": true, 00:05:11.311 "flush": true, 00:05:11.311 "reset": true, 00:05:11.311 "nvme_admin": false, 00:05:11.311 "nvme_io": false, 00:05:11.311 "nvme_io_md": false, 00:05:11.311 "write_zeroes": true, 00:05:11.311 "zcopy": true, 00:05:11.311 "get_zone_info": false, 00:05:11.311 "zone_management": false, 00:05:11.311 "zone_append": false, 00:05:11.311 "compare": false, 00:05:11.311 "compare_and_write": false, 00:05:11.311 "abort": true, 00:05:11.311 "seek_hole": false, 00:05:11.311 "seek_data": false, 00:05:11.311 "copy": true, 00:05:11.311 "nvme_iov_md": false 
00:05:11.311 }, 00:05:11.311 "memory_domains": [ 00:05:11.311 { 00:05:11.311 "dma_device_id": "system", 00:05:11.311 "dma_device_type": 1 00:05:11.311 }, 00:05:11.311 { 00:05:11.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.311 "dma_device_type": 2 00:05:11.311 } 00:05:11.311 ], 00:05:11.311 "driver_specific": {} 00:05:11.311 } 00:05:11.311 ]' 00:05:11.311 02:46:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:11.311 02:46:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:11.311 02:46:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:11.311 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.311 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.311 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.311 02:46:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:11.311 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.311 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.311 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.311 02:46:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:11.311 02:46:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:11.311 02:46:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:11.311 00:05:11.311 real 0m0.146s 00:05:11.311 user 0m0.086s 00:05:11.311 sys 0m0.021s 00:05:11.311 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.311 02:46:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.311 ************************************ 00:05:11.311 END TEST rpc_plugins 00:05:11.311 ************************************ 00:05:11.311 02:46:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:11.311 02:46:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.311 02:46:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.311 02:46:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.311 ************************************ 00:05:11.311 START TEST rpc_trace_cmd_test 00:05:11.311 ************************************ 00:05:11.311 02:46:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:11.311 02:46:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:11.311 02:46:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:11.311 02:46:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.311 02:46:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.311 02:46:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.311 02:46:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:11.311 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid104315", 00:05:11.311 "tpoint_group_mask": "0x8", 00:05:11.311 "iscsi_conn": { 00:05:11.311 "mask": "0x2", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "scsi": { 00:05:11.311 "mask": "0x4", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "bdev": { 00:05:11.311 "mask": "0x8", 00:05:11.311 "tpoint_mask": "0xffffffffffffffff" 00:05:11.311 }, 00:05:11.311 "nvmf_rdma": { 00:05:11.311 "mask": "0x10", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "nvmf_tcp": { 00:05:11.311 "mask": "0x20", 00:05:11.311 
"tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "ftl": { 00:05:11.311 "mask": "0x40", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "blobfs": { 00:05:11.311 "mask": "0x80", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "dsa": { 00:05:11.311 "mask": "0x200", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "thread": { 00:05:11.311 "mask": "0x400", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "nvme_pcie": { 00:05:11.311 "mask": "0x800", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "iaa": { 00:05:11.311 "mask": "0x1000", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "nvme_tcp": { 00:05:11.311 "mask": "0x2000", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "bdev_nvme": { 00:05:11.311 "mask": "0x4000", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "sock": { 00:05:11.311 "mask": "0x8000", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "blob": { 00:05:11.311 "mask": "0x10000", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "bdev_raid": { 00:05:11.311 "mask": "0x20000", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 }, 00:05:11.311 "scheduler": { 00:05:11.311 "mask": "0x40000", 00:05:11.311 "tpoint_mask": "0x0" 00:05:11.311 } 00:05:11.311 }' 00:05:11.311 02:46:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:11.311 02:46:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:11.311 02:46:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:11.571 02:46:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:11.571 02:46:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:11.571 02:46:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:11.571 02:46:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:11.571 02:46:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:11.571 02:46:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:11.571 02:46:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:11.571 00:05:11.571 real 0m0.195s 00:05:11.571 user 0m0.159s 00:05:11.571 sys 0m0.028s 00:05:11.571 02:46:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.571 02:46:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.571 ************************************ 00:05:11.571 END TEST rpc_trace_cmd_test 00:05:11.571 ************************************ 00:05:11.571 02:46:26 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:11.571 02:46:26 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:11.571 02:46:26 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:11.571 02:46:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.571 02:46:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.571 02:46:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.571 ************************************ 00:05:11.571 START TEST rpc_daemon_integrity 00:05:11.571 ************************************ 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.571 02:46:26 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.571 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.831 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.831 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:11.831 { 00:05:11.831 "name": "Malloc2", 00:05:11.831 "aliases": [ 00:05:11.831 "90269a66-9d63-4db6-85b0-45aa3a6501d6" 00:05:11.831 ], 00:05:11.831 "product_name": "Malloc disk", 00:05:11.831 "block_size": 512, 00:05:11.831 "num_blocks": 16384, 00:05:11.831 "uuid": "90269a66-9d63-4db6-85b0-45aa3a6501d6", 00:05:11.831 "assigned_rate_limits": { 00:05:11.831 "rw_ios_per_sec": 0, 00:05:11.831 "rw_mbytes_per_sec": 0, 00:05:11.831 "r_mbytes_per_sec": 0, 00:05:11.831 "w_mbytes_per_sec": 0 00:05:11.831 }, 00:05:11.831 "claimed": false, 00:05:11.831 "zoned": false, 00:05:11.831 "supported_io_types": { 00:05:11.831 "read": true, 00:05:11.831 "write": true, 00:05:11.831 "unmap": true, 00:05:11.831 "flush": true, 00:05:11.831 "reset": true, 00:05:11.831 "nvme_admin": false, 00:05:11.831 "nvme_io": false, 00:05:11.831 "nvme_io_md": false, 00:05:11.831 "write_zeroes": true, 00:05:11.831 "zcopy": true, 00:05:11.831 "get_zone_info": false, 00:05:11.831 "zone_management": false, 00:05:11.831 "zone_append": false, 00:05:11.831 "compare": false, 00:05:11.831 "compare_and_write": false, 00:05:11.831 "abort": true, 00:05:11.831 "seek_hole": false, 00:05:11.831 "seek_data": false, 00:05:11.831 "copy": true, 00:05:11.831 "nvme_iov_md": false 00:05:11.831 }, 00:05:11.831 "memory_domains": [ 00:05:11.831 { 00:05:11.831 "dma_device_id": "system", 00:05:11.831 "dma_device_type": 1 00:05:11.831 }, 00:05:11.831 { 00:05:11.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.831 "dma_device_type": 2 00:05:11.831 } 00:05:11.831 ], 00:05:11.831 "driver_specific": {} 00:05:11.831 } 00:05:11.831 ]' 00:05:11.831 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:11.831 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:11.831 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:11.831 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.831 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.831 [2024-12-14 02:46:26.765749] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:11.831 
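The vbdev_passthru notices here are the passthru vbdev taking its claim on Malloc2; the bdev_get_bdevs JSON that follows reports the base bdev with "claimed": true and "claim_type": "exclusive_write". A one-line sketch for inspecting just that claim on a running target (assumes jq is available, as elsewhere in this test, and that the working directory is the SPDK repo root):

  ./scripts/rpc.py bdev_get_bdevs -b Malloc2 | jq '.[0] | {claimed, claim_type}'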
[2024-12-14 02:46:26.765775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.831 [2024-12-14 02:46:26.765790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfa9f80 00:05:11.831 [2024-12-14 02:46:26.765796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:11.831 [2024-12-14 02:46:26.766750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.831 [2024-12-14 02:46:26.766770] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:11.831 Passthru0 00:05:11.831 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.831 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:11.831 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.831 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.831 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.831 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:11.831 { 00:05:11.831 "name": "Malloc2", 00:05:11.831 "aliases": [ 00:05:11.831 "90269a66-9d63-4db6-85b0-45aa3a6501d6" 00:05:11.831 ], 00:05:11.831 "product_name": "Malloc disk", 00:05:11.831 "block_size": 512, 00:05:11.831 "num_blocks": 16384, 00:05:11.831 "uuid": "90269a66-9d63-4db6-85b0-45aa3a6501d6", 00:05:11.831 "assigned_rate_limits": { 00:05:11.831 "rw_ios_per_sec": 0, 00:05:11.831 "rw_mbytes_per_sec": 0, 00:05:11.831 "r_mbytes_per_sec": 0, 00:05:11.831 "w_mbytes_per_sec": 0 00:05:11.831 }, 00:05:11.831 "claimed": true, 00:05:11.831 "claim_type": "exclusive_write", 00:05:11.831 "zoned": false, 00:05:11.831 "supported_io_types": { 00:05:11.831 "read": true, 00:05:11.831 "write": true, 00:05:11.831 "unmap": true, 00:05:11.831 "flush": true, 00:05:11.831 "reset": true, 00:05:11.831 "nvme_admin": false, 00:05:11.831 "nvme_io": false, 00:05:11.831 "nvme_io_md": false, 00:05:11.831 "write_zeroes": true, 00:05:11.831 "zcopy": true, 00:05:11.831 "get_zone_info": false, 00:05:11.831 "zone_management": false, 00:05:11.831 "zone_append": false, 00:05:11.831 "compare": false, 00:05:11.831 "compare_and_write": false, 00:05:11.831 "abort": true, 00:05:11.831 "seek_hole": false, 00:05:11.831 "seek_data": false, 00:05:11.831 "copy": true, 00:05:11.831 "nvme_iov_md": false 00:05:11.831 }, 00:05:11.831 "memory_domains": [ 00:05:11.831 { 00:05:11.831 "dma_device_id": "system", 00:05:11.831 "dma_device_type": 1 00:05:11.831 }, 00:05:11.831 { 00:05:11.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.831 "dma_device_type": 2 00:05:11.831 } 00:05:11.831 ], 00:05:11.831 "driver_specific": {} 00:05:11.831 }, 00:05:11.831 { 00:05:11.831 "name": "Passthru0", 00:05:11.831 "aliases": [ 00:05:11.831 "156ba357-4768-54d1-aa28-3150c7708df3" 00:05:11.831 ], 00:05:11.831 "product_name": "passthru", 00:05:11.831 "block_size": 512, 00:05:11.831 "num_blocks": 16384, 00:05:11.831 "uuid": "156ba357-4768-54d1-aa28-3150c7708df3", 00:05:11.831 "assigned_rate_limits": { 00:05:11.831 "rw_ios_per_sec": 0, 00:05:11.831 "rw_mbytes_per_sec": 0, 00:05:11.831 "r_mbytes_per_sec": 0, 00:05:11.831 "w_mbytes_per_sec": 0 00:05:11.831 }, 00:05:11.831 "claimed": false, 00:05:11.831 "zoned": false, 00:05:11.831 "supported_io_types": { 00:05:11.831 "read": true, 00:05:11.831 "write": true, 00:05:11.831 "unmap": true, 00:05:11.831 "flush": true, 00:05:11.831 "reset": true, 
00:05:11.831 "nvme_admin": false, 00:05:11.831 "nvme_io": false, 00:05:11.831 "nvme_io_md": false, 00:05:11.831 "write_zeroes": true, 00:05:11.831 "zcopy": true, 00:05:11.831 "get_zone_info": false, 00:05:11.831 "zone_management": false, 00:05:11.831 "zone_append": false, 00:05:11.831 "compare": false, 00:05:11.831 "compare_and_write": false, 00:05:11.831 "abort": true, 00:05:11.831 "seek_hole": false, 00:05:11.831 "seek_data": false, 00:05:11.831 "copy": true, 00:05:11.831 "nvme_iov_md": false 00:05:11.831 }, 00:05:11.831 "memory_domains": [ 00:05:11.831 { 00:05:11.831 "dma_device_id": "system", 00:05:11.831 "dma_device_type": 1 00:05:11.831 }, 00:05:11.831 { 00:05:11.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.831 "dma_device_type": 2 00:05:11.831 } 00:05:11.831 ], 00:05:11.831 "driver_specific": { 00:05:11.831 "passthru": { 00:05:11.831 "name": "Passthru0", 00:05:11.831 "base_bdev_name": "Malloc2" 00:05:11.831 } 00:05:11.831 } 00:05:11.831 } 00:05:11.832 ]' 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:11.832 00:05:11.832 real 0m0.281s 00:05:11.832 user 0m0.187s 00:05:11.832 sys 0m0.032s 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.832 02:46:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.832 ************************************ 00:05:11.832 END TEST rpc_daemon_integrity 00:05:11.832 ************************************ 00:05:11.832 02:46:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:11.832 02:46:26 rpc -- rpc/rpc.sh@84 -- # killprocess 104315 00:05:11.832 02:46:26 rpc -- common/autotest_common.sh@954 -- # '[' -z 104315 ']' 00:05:11.832 02:46:26 rpc -- common/autotest_common.sh@958 -- # kill -0 104315 00:05:11.832 02:46:26 rpc -- common/autotest_common.sh@959 -- # uname 00:05:11.832 02:46:26 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.832 02:46:26 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104315 
00:05:12.092 02:46:26 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.092 02:46:26 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.092 02:46:26 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104315' 00:05:12.092 killing process with pid 104315 00:05:12.092 02:46:26 rpc -- common/autotest_common.sh@973 -- # kill 104315 00:05:12.092 02:46:26 rpc -- common/autotest_common.sh@978 -- # wait 104315 00:05:12.352 00:05:12.352 real 0m2.058s 00:05:12.352 user 0m2.631s 00:05:12.352 sys 0m0.693s 00:05:12.352 02:46:27 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.352 02:46:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.352 ************************************ 00:05:12.352 END TEST rpc 00:05:12.352 ************************************ 00:05:12.352 02:46:27 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:12.352 02:46:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.352 02:46:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.352 02:46:27 -- common/autotest_common.sh@10 -- # set +x 00:05:12.352 ************************************ 00:05:12.352 START TEST skip_rpc 00:05:12.352 ************************************ 00:05:12.352 02:46:27 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:12.352 * Looking for test storage... 00:05:12.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.352 02:46:27 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:12.352 02:46:27 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:12.352 02:46:27 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:12.612 02:46:27 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.612 02:46:27 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.613 02:46:27 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:12.613 02:46:27 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:12.613 02:46:27 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.613 02:46:27 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:12.613 02:46:27 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.613 02:46:27 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:12.613 02:46:27 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:12.613 02:46:27 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.613 02:46:27 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:12.613 02:46:27 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.613 02:46:27 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.613 02:46:27 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.613 02:46:27 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:12.613 02:46:27 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.613 02:46:27 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:12.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.613 --rc genhtml_branch_coverage=1 00:05:12.613 --rc genhtml_function_coverage=1 00:05:12.613 --rc genhtml_legend=1 00:05:12.613 --rc geninfo_all_blocks=1 00:05:12.613 --rc geninfo_unexecuted_blocks=1 00:05:12.613 00:05:12.613 ' 00:05:12.613 02:46:27 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:12.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.613 --rc genhtml_branch_coverage=1 00:05:12.613 --rc genhtml_function_coverage=1 00:05:12.613 --rc genhtml_legend=1 00:05:12.613 --rc geninfo_all_blocks=1 00:05:12.613 --rc geninfo_unexecuted_blocks=1 00:05:12.613 00:05:12.613 ' 00:05:12.613 02:46:27 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:12.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.613 --rc genhtml_branch_coverage=1 00:05:12.613 --rc genhtml_function_coverage=1 00:05:12.613 --rc genhtml_legend=1 00:05:12.613 --rc geninfo_all_blocks=1 00:05:12.613 --rc geninfo_unexecuted_blocks=1 00:05:12.613 00:05:12.613 ' 00:05:12.613 02:46:27 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:12.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.613 --rc genhtml_branch_coverage=1 00:05:12.613 --rc genhtml_function_coverage=1 00:05:12.613 --rc genhtml_legend=1 00:05:12.613 --rc geninfo_all_blocks=1 00:05:12.613 --rc geninfo_unexecuted_blocks=1 00:05:12.613 00:05:12.613 ' 00:05:12.613 02:46:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:12.613 02:46:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:12.613 02:46:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:12.613 02:46:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.613 02:46:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.613 02:46:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.613 ************************************ 00:05:12.613 START TEST skip_rpc 00:05:12.613 ************************************ 00:05:12.613 02:46:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:12.613 
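test_skip_rpc, which starts here, launches spdk_tgt with --no-rpc-server and then expects an RPC call to fail (the NOT rpc_cmd spdk_get_version check further down). The same check done by hand looks roughly like this sketch (from the SPDK repo root):

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                   # the test also just sleeps, since there is no RPC socket to wait for
  ./scripts/rpc.py spdk_get_version \
      && echo "unexpected: RPC server answered" \
      || echo "expected failure: no RPC server"
  kill $tgt_pid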
02:46:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=104938 00:05:12.613 02:46:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:12.613 02:46:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.613 02:46:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:12.613 [2024-12-14 02:46:27.609246] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:12.613 [2024-12-14 02:46:27.609279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104938 ] 00:05:12.613 [2024-12-14 02:46:27.679241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.613 [2024-12-14 02:46:27.701202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 104938 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 104938 ']' 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 104938 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104938 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104938' 00:05:17.894 killing process with pid 104938 00:05:17.894 02:46:32 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 104938 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 104938 00:05:17.894 00:05:17.894 real 0m5.359s 00:05:17.894 user 0m5.119s 00:05:17.894 sys 0m0.275s 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.894 02:46:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.894 ************************************ 00:05:17.894 END TEST skip_rpc 00:05:17.894 ************************************ 00:05:17.894 02:46:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:17.894 02:46:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.894 02:46:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.894 02:46:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.894 ************************************ 00:05:17.894 START TEST skip_rpc_with_json 00:05:17.894 ************************************ 00:05:17.894 02:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:17.894 02:46:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:17.894 02:46:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=105882 00:05:17.894 02:46:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.894 02:46:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.894 02:46:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 105882 00:05:17.894 02:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 105882 ']' 00:05:17.894 02:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.894 02:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.894 02:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.894 02:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.894 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.154 [2024-12-14 02:46:33.048078] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
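test_skip_rpc_with_json, whose target is starting up here, exercises the save_config / --json round trip: make a change over RPC (the nvmf_create_transport -t tcp call below), dump the running configuration to the config.json shown after it, then relaunch spdk_tgt from that file with no RPC server at all. A rough sketch of the same round trip, from the repo root and using the test's own CONFIG_PATH of test/rpc/config.json:

  ./build/bin/spdk_tgt -m 0x1 &
  tgt_pid=$!
  sleep 2                                               # crude stand-in for the test's waitforlisten
  ./scripts/rpc.py nvmf_create_transport -t tcp         # the change to be persisted
  ./scripts/rpc.py save_config > test/rpc/config.json   # produces the JSON dumped below
  kill $tgt_pid; wait $tgt_pid

  # restart purely from the saved config, no RPC server this time
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json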
00:05:18.154 [2024-12-14 02:46:33.048116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105882 ] 00:05:18.154 [2024-12-14 02:46:33.123026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.154 [2024-12-14 02:46:33.145753] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.414 [2024-12-14 02:46:33.345886] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:18.414 request: 00:05:18.414 { 00:05:18.414 "trtype": "tcp", 00:05:18.414 "method": "nvmf_get_transports", 00:05:18.414 "req_id": 1 00:05:18.414 } 00:05:18.414 Got JSON-RPC error response 00:05:18.414 response: 00:05:18.414 { 00:05:18.414 "code": -19, 00:05:18.414 "message": "No such device" 00:05:18.414 } 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.414 [2024-12-14 02:46:33.357989] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.414 02:46:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:18.414 { 00:05:18.414 "subsystems": [ 00:05:18.414 { 00:05:18.414 "subsystem": "fsdev", 00:05:18.414 "config": [ 00:05:18.414 { 00:05:18.414 "method": "fsdev_set_opts", 00:05:18.414 "params": { 00:05:18.414 "fsdev_io_pool_size": 65535, 00:05:18.414 "fsdev_io_cache_size": 256 00:05:18.414 } 00:05:18.414 } 00:05:18.414 ] 00:05:18.414 }, 00:05:18.414 { 00:05:18.414 "subsystem": "vfio_user_target", 00:05:18.414 "config": null 00:05:18.414 }, 00:05:18.414 { 00:05:18.414 "subsystem": "keyring", 00:05:18.414 "config": [] 00:05:18.414 }, 00:05:18.414 { 00:05:18.414 "subsystem": "iobuf", 00:05:18.414 "config": [ 00:05:18.414 { 00:05:18.414 "method": "iobuf_set_options", 00:05:18.414 "params": { 00:05:18.414 "small_pool_count": 8192, 00:05:18.414 "large_pool_count": 1024, 00:05:18.414 "small_bufsize": 8192, 00:05:18.414 "large_bufsize": 135168, 00:05:18.414 "enable_numa": false 00:05:18.414 } 00:05:18.414 } 00:05:18.414 
] 00:05:18.414 }, 00:05:18.414 { 00:05:18.414 "subsystem": "sock", 00:05:18.414 "config": [ 00:05:18.414 { 00:05:18.414 "method": "sock_set_default_impl", 00:05:18.414 "params": { 00:05:18.414 "impl_name": "posix" 00:05:18.414 } 00:05:18.414 }, 00:05:18.414 { 00:05:18.414 "method": "sock_impl_set_options", 00:05:18.414 "params": { 00:05:18.414 "impl_name": "ssl", 00:05:18.414 "recv_buf_size": 4096, 00:05:18.414 "send_buf_size": 4096, 00:05:18.414 "enable_recv_pipe": true, 00:05:18.414 "enable_quickack": false, 00:05:18.414 "enable_placement_id": 0, 00:05:18.414 "enable_zerocopy_send_server": true, 00:05:18.414 "enable_zerocopy_send_client": false, 00:05:18.414 "zerocopy_threshold": 0, 00:05:18.414 "tls_version": 0, 00:05:18.414 "enable_ktls": false 00:05:18.414 } 00:05:18.414 }, 00:05:18.414 { 00:05:18.414 "method": "sock_impl_set_options", 00:05:18.414 "params": { 00:05:18.414 "impl_name": "posix", 00:05:18.414 "recv_buf_size": 2097152, 00:05:18.414 "send_buf_size": 2097152, 00:05:18.414 "enable_recv_pipe": true, 00:05:18.414 "enable_quickack": false, 00:05:18.414 "enable_placement_id": 0, 00:05:18.414 "enable_zerocopy_send_server": true, 00:05:18.414 "enable_zerocopy_send_client": false, 00:05:18.414 "zerocopy_threshold": 0, 00:05:18.414 "tls_version": 0, 00:05:18.414 "enable_ktls": false 00:05:18.414 } 00:05:18.414 } 00:05:18.414 ] 00:05:18.414 }, 00:05:18.414 { 00:05:18.414 "subsystem": "vmd", 00:05:18.414 "config": [] 00:05:18.414 }, 00:05:18.414 { 00:05:18.414 "subsystem": "accel", 00:05:18.414 "config": [ 00:05:18.414 { 00:05:18.414 "method": "accel_set_options", 00:05:18.414 "params": { 00:05:18.414 "small_cache_size": 128, 00:05:18.414 "large_cache_size": 16, 00:05:18.414 "task_count": 2048, 00:05:18.414 "sequence_count": 2048, 00:05:18.414 "buf_count": 2048 00:05:18.414 } 00:05:18.414 } 00:05:18.414 ] 00:05:18.414 }, 00:05:18.414 { 00:05:18.414 "subsystem": "bdev", 00:05:18.414 "config": [ 00:05:18.414 { 00:05:18.414 "method": "bdev_set_options", 00:05:18.414 "params": { 00:05:18.414 "bdev_io_pool_size": 65535, 00:05:18.414 "bdev_io_cache_size": 256, 00:05:18.414 "bdev_auto_examine": true, 00:05:18.414 "iobuf_small_cache_size": 128, 00:05:18.414 "iobuf_large_cache_size": 16 00:05:18.414 } 00:05:18.414 }, 00:05:18.414 { 00:05:18.414 "method": "bdev_raid_set_options", 00:05:18.414 "params": { 00:05:18.414 "process_window_size_kb": 1024, 00:05:18.415 "process_max_bandwidth_mb_sec": 0 00:05:18.415 } 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "method": "bdev_iscsi_set_options", 00:05:18.415 "params": { 00:05:18.415 "timeout_sec": 30 00:05:18.415 } 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "method": "bdev_nvme_set_options", 00:05:18.415 "params": { 00:05:18.415 "action_on_timeout": "none", 00:05:18.415 "timeout_us": 0, 00:05:18.415 "timeout_admin_us": 0, 00:05:18.415 "keep_alive_timeout_ms": 10000, 00:05:18.415 "arbitration_burst": 0, 00:05:18.415 "low_priority_weight": 0, 00:05:18.415 "medium_priority_weight": 0, 00:05:18.415 "high_priority_weight": 0, 00:05:18.415 "nvme_adminq_poll_period_us": 10000, 00:05:18.415 "nvme_ioq_poll_period_us": 0, 00:05:18.415 "io_queue_requests": 0, 00:05:18.415 "delay_cmd_submit": true, 00:05:18.415 "transport_retry_count": 4, 00:05:18.415 "bdev_retry_count": 3, 00:05:18.415 "transport_ack_timeout": 0, 00:05:18.415 "ctrlr_loss_timeout_sec": 0, 00:05:18.415 "reconnect_delay_sec": 0, 00:05:18.415 "fast_io_fail_timeout_sec": 0, 00:05:18.415 "disable_auto_failback": false, 00:05:18.415 "generate_uuids": false, 00:05:18.415 "transport_tos": 0, 
00:05:18.415 "nvme_error_stat": false, 00:05:18.415 "rdma_srq_size": 0, 00:05:18.415 "io_path_stat": false, 00:05:18.415 "allow_accel_sequence": false, 00:05:18.415 "rdma_max_cq_size": 0, 00:05:18.415 "rdma_cm_event_timeout_ms": 0, 00:05:18.415 "dhchap_digests": [ 00:05:18.415 "sha256", 00:05:18.415 "sha384", 00:05:18.415 "sha512" 00:05:18.415 ], 00:05:18.415 "dhchap_dhgroups": [ 00:05:18.415 "null", 00:05:18.415 "ffdhe2048", 00:05:18.415 "ffdhe3072", 00:05:18.415 "ffdhe4096", 00:05:18.415 "ffdhe6144", 00:05:18.415 "ffdhe8192" 00:05:18.415 ], 00:05:18.415 "rdma_umr_per_io": false 00:05:18.415 } 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "method": "bdev_nvme_set_hotplug", 00:05:18.415 "params": { 00:05:18.415 "period_us": 100000, 00:05:18.415 "enable": false 00:05:18.415 } 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "method": "bdev_wait_for_examine" 00:05:18.415 } 00:05:18.415 ] 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "subsystem": "scsi", 00:05:18.415 "config": null 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "subsystem": "scheduler", 00:05:18.415 "config": [ 00:05:18.415 { 00:05:18.415 "method": "framework_set_scheduler", 00:05:18.415 "params": { 00:05:18.415 "name": "static" 00:05:18.415 } 00:05:18.415 } 00:05:18.415 ] 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "subsystem": "vhost_scsi", 00:05:18.415 "config": [] 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "subsystem": "vhost_blk", 00:05:18.415 "config": [] 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "subsystem": "ublk", 00:05:18.415 "config": [] 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "subsystem": "nbd", 00:05:18.415 "config": [] 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "subsystem": "nvmf", 00:05:18.415 "config": [ 00:05:18.415 { 00:05:18.415 "method": "nvmf_set_config", 00:05:18.415 "params": { 00:05:18.415 "discovery_filter": "match_any", 00:05:18.415 "admin_cmd_passthru": { 00:05:18.415 "identify_ctrlr": false 00:05:18.415 }, 00:05:18.415 "dhchap_digests": [ 00:05:18.415 "sha256", 00:05:18.415 "sha384", 00:05:18.415 "sha512" 00:05:18.415 ], 00:05:18.415 "dhchap_dhgroups": [ 00:05:18.415 "null", 00:05:18.415 "ffdhe2048", 00:05:18.415 "ffdhe3072", 00:05:18.415 "ffdhe4096", 00:05:18.415 "ffdhe6144", 00:05:18.415 "ffdhe8192" 00:05:18.415 ] 00:05:18.415 } 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "method": "nvmf_set_max_subsystems", 00:05:18.415 "params": { 00:05:18.415 "max_subsystems": 1024 00:05:18.415 } 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "method": "nvmf_set_crdt", 00:05:18.415 "params": { 00:05:18.415 "crdt1": 0, 00:05:18.415 "crdt2": 0, 00:05:18.415 "crdt3": 0 00:05:18.415 } 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "method": "nvmf_create_transport", 00:05:18.415 "params": { 00:05:18.415 "trtype": "TCP", 00:05:18.415 "max_queue_depth": 128, 00:05:18.415 "max_io_qpairs_per_ctrlr": 127, 00:05:18.415 "in_capsule_data_size": 4096, 00:05:18.415 "max_io_size": 131072, 00:05:18.415 "io_unit_size": 131072, 00:05:18.415 "max_aq_depth": 128, 00:05:18.415 "num_shared_buffers": 511, 00:05:18.415 "buf_cache_size": 4294967295, 00:05:18.415 "dif_insert_or_strip": false, 00:05:18.415 "zcopy": false, 00:05:18.415 "c2h_success": true, 00:05:18.415 "sock_priority": 0, 00:05:18.415 "abort_timeout_sec": 1, 00:05:18.415 "ack_timeout": 0, 00:05:18.415 "data_wr_pool_size": 0 00:05:18.415 } 00:05:18.415 } 00:05:18.415 ] 00:05:18.415 }, 00:05:18.415 { 00:05:18.415 "subsystem": "iscsi", 00:05:18.415 "config": [ 00:05:18.415 { 00:05:18.415 "method": "iscsi_set_options", 00:05:18.415 "params": { 00:05:18.415 "node_base": 
"iqn.2016-06.io.spdk", 00:05:18.415 "max_sessions": 128, 00:05:18.415 "max_connections_per_session": 2, 00:05:18.415 "max_queue_depth": 64, 00:05:18.415 "default_time2wait": 2, 00:05:18.415 "default_time2retain": 20, 00:05:18.415 "first_burst_length": 8192, 00:05:18.415 "immediate_data": true, 00:05:18.415 "allow_duplicated_isid": false, 00:05:18.415 "error_recovery_level": 0, 00:05:18.415 "nop_timeout": 60, 00:05:18.415 "nop_in_interval": 30, 00:05:18.415 "disable_chap": false, 00:05:18.415 "require_chap": false, 00:05:18.415 "mutual_chap": false, 00:05:18.415 "chap_group": 0, 00:05:18.415 "max_large_datain_per_connection": 64, 00:05:18.415 "max_r2t_per_connection": 4, 00:05:18.415 "pdu_pool_size": 36864, 00:05:18.415 "immediate_data_pool_size": 16384, 00:05:18.415 "data_out_pool_size": 2048 00:05:18.415 } 00:05:18.415 } 00:05:18.415 ] 00:05:18.415 } 00:05:18.415 ] 00:05:18.415 } 00:05:18.415 02:46:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:18.415 02:46:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 105882 00:05:18.415 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 105882 ']' 00:05:18.415 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 105882 00:05:18.415 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:18.415 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.415 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105882 00:05:18.675 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.675 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.675 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105882' 00:05:18.675 killing process with pid 105882 00:05:18.675 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 105882 00:05:18.675 02:46:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 105882 00:05:18.934 02:46:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=105963 00:05:18.934 02:46:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:18.934 02:46:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:24.206 02:46:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 105963 00:05:24.206 02:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 105963 ']' 00:05:24.206 02:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 105963 00:05:24.206 02:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:24.206 02:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.206 02:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105963 00:05:24.206 02:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.206 02:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.206 02:46:38 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105963' 00:05:24.206 killing process with pid 105963 00:05:24.206 02:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 105963 00:05:24.206 02:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 105963 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:24.206 00:05:24.206 real 0m6.232s 00:05:24.206 user 0m5.909s 00:05:24.206 sys 0m0.624s 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.206 ************************************ 00:05:24.206 END TEST skip_rpc_with_json 00:05:24.206 ************************************ 00:05:24.206 02:46:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:24.206 02:46:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.206 02:46:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.206 02:46:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.206 ************************************ 00:05:24.206 START TEST skip_rpc_with_delay 00:05:24.206 ************************************ 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:24.206 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.466 [2024-12-14 02:46:39.358500] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:24.466 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:24.466 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:24.466 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:24.466 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:24.466 00:05:24.466 real 0m0.070s 00:05:24.466 user 0m0.045s 00:05:24.466 sys 0m0.024s 00:05:24.466 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.466 02:46:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:24.466 ************************************ 00:05:24.466 END TEST skip_rpc_with_delay 00:05:24.466 ************************************ 00:05:24.466 02:46:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:24.466 02:46:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:24.466 02:46:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:24.466 02:46:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.466 02:46:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.466 02:46:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.466 ************************************ 00:05:24.466 START TEST exit_on_failed_rpc_init 00:05:24.466 ************************************ 00:05:24.466 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:24.466 02:46:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=107006 00:05:24.466 02:46:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 107006 00:05:24.466 02:46:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.466 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 107006 ']' 00:05:24.466 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.466 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.466 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.466 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.466 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.466 [2024-12-14 02:46:39.502279] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:24.466 [2024-12-14 02:46:39.502333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107006 ] 00:05:24.466 [2024-12-14 02:46:39.578317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.725 [2024-12-14 02:46:39.601664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:24.725 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.985 [2024-12-14 02:46:39.863937] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:24.985 [2024-12-14 02:46:39.863977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107063 ] 00:05:24.985 [2024-12-14 02:46:39.937003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.985 [2024-12-14 02:46:39.959184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.985 [2024-12-14 02:46:39.959239] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:24.985 [2024-12-14 02:46:39.959248] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:24.985 [2024-12-14 02:46:39.959254] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:24.985 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:24.985 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:24.985 02:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 107006 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 107006 ']' 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 107006 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107006 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107006' 00:05:24.985 killing process with pid 107006 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 107006 00:05:24.985 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 107006 00:05:25.245 00:05:25.245 real 0m0.897s 00:05:25.245 user 0m0.914s 00:05:25.245 sys 0m0.406s 00:05:25.245 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.245 02:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.245 ************************************ 00:05:25.245 END TEST exit_on_failed_rpc_init 00:05:25.245 ************************************ 00:05:25.505 02:46:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:25.505 00:05:25.505 real 0m13.028s 00:05:25.505 user 0m12.196s 00:05:25.505 sys 0m1.621s 00:05:25.505 02:46:40 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.505 02:46:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.505 ************************************ 00:05:25.505 END TEST skip_rpc 00:05:25.505 ************************************ 00:05:25.505 02:46:40 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:25.505 02:46:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.505 02:46:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.505 02:46:40 -- 
common/autotest_common.sh@10 -- # set +x 00:05:25.505 ************************************ 00:05:25.505 START TEST rpc_client 00:05:25.505 ************************************ 00:05:25.505 02:46:40 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:25.505 * Looking for test storage... 00:05:25.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:25.505 02:46:40 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:25.505 02:46:40 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:25.505 02:46:40 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:25.505 02:46:40 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.505 02:46:40 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:25.505 02:46:40 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.505 02:46:40 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:25.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.505 --rc genhtml_branch_coverage=1 00:05:25.505 --rc genhtml_function_coverage=1 00:05:25.505 --rc genhtml_legend=1 00:05:25.505 --rc geninfo_all_blocks=1 00:05:25.505 --rc geninfo_unexecuted_blocks=1 00:05:25.505 00:05:25.505 ' 00:05:25.505 02:46:40 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:25.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.505 --rc genhtml_branch_coverage=1 00:05:25.505 --rc genhtml_function_coverage=1 00:05:25.505 --rc genhtml_legend=1 00:05:25.505 --rc geninfo_all_blocks=1 00:05:25.505 --rc geninfo_unexecuted_blocks=1 00:05:25.505 00:05:25.505 ' 00:05:25.505 02:46:40 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:25.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.505 --rc genhtml_branch_coverage=1 00:05:25.505 --rc genhtml_function_coverage=1 00:05:25.505 --rc genhtml_legend=1 00:05:25.505 --rc geninfo_all_blocks=1 00:05:25.505 --rc geninfo_unexecuted_blocks=1 00:05:25.505 00:05:25.505 ' 00:05:25.505 02:46:40 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:25.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.505 --rc genhtml_branch_coverage=1 00:05:25.505 --rc genhtml_function_coverage=1 00:05:25.505 --rc genhtml_legend=1 00:05:25.505 --rc geninfo_all_blocks=1 00:05:25.505 --rc geninfo_unexecuted_blocks=1 00:05:25.505 00:05:25.505 ' 00:05:25.505 02:46:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:25.765 OK 00:05:25.765 02:46:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:25.765 00:05:25.765 real 0m0.190s 00:05:25.765 user 0m0.108s 00:05:25.765 sys 0m0.094s 00:05:25.765 02:46:40 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.765 02:46:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:25.765 ************************************ 00:05:25.765 END TEST rpc_client 00:05:25.765 ************************************ 00:05:25.765 02:46:40 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
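The lcov version check traced in the rpc_client prologue above (and again in the json_config prologue below) splits each version string on '.', '-' and ':' and compares the components numerically. A minimal standalone sketch of that comparison, assuming purely numeric components; the function name is illustrative, not the exact scripts/common.sh helper:

#!/usr/bin/env bash
# version_lt returns 0 (true) when $1 sorts strictly before $2, mirroring the traced lt/cmp_versions flow.
version_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i a b
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}
        ((a < b)) && return 0   # first differing component decides
        ((a > b)) && return 1
    done
    return 1                    # equal versions are not "less than"
}

version_lt 1.15 2 && echo '1.15 < 2'   # same comparison the log performs for lcov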
00:05:25.765 02:46:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.765 02:46:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.765 02:46:40 -- common/autotest_common.sh@10 -- # set +x 00:05:25.765 ************************************ 00:05:25.765 START TEST json_config 00:05:25.765 ************************************ 00:05:25.765 02:46:40 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:25.765 02:46:40 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:25.765 02:46:40 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:25.765 02:46:40 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:25.765 02:46:40 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:25.765 02:46:40 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.765 02:46:40 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.765 02:46:40 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.765 02:46:40 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.765 02:46:40 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.765 02:46:40 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.765 02:46:40 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.765 02:46:40 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.765 02:46:40 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.766 02:46:40 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.766 02:46:40 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.766 02:46:40 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:25.766 02:46:40 json_config -- scripts/common.sh@345 -- # : 1 00:05:25.766 02:46:40 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.766 02:46:40 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.766 02:46:40 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:25.766 02:46:40 json_config -- scripts/common.sh@353 -- # local d=1 00:05:25.766 02:46:40 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.766 02:46:40 json_config -- scripts/common.sh@355 -- # echo 1 00:05:25.766 02:46:40 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.766 02:46:40 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:25.766 02:46:40 json_config -- scripts/common.sh@353 -- # local d=2 00:05:25.766 02:46:40 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.766 02:46:40 json_config -- scripts/common.sh@355 -- # echo 2 00:05:25.766 02:46:40 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.766 02:46:40 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.766 02:46:40 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.766 02:46:40 json_config -- scripts/common.sh@368 -- # return 0 00:05:25.766 02:46:40 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.766 02:46:40 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.766 --rc genhtml_branch_coverage=1 00:05:25.766 --rc genhtml_function_coverage=1 00:05:25.766 --rc genhtml_legend=1 00:05:25.766 --rc geninfo_all_blocks=1 00:05:25.766 --rc geninfo_unexecuted_blocks=1 00:05:25.766 00:05:25.766 ' 00:05:25.766 02:46:40 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.766 --rc genhtml_branch_coverage=1 00:05:25.766 --rc genhtml_function_coverage=1 00:05:25.766 --rc genhtml_legend=1 00:05:25.766 --rc geninfo_all_blocks=1 00:05:25.766 --rc geninfo_unexecuted_blocks=1 00:05:25.766 00:05:25.766 ' 00:05:25.766 02:46:40 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.766 --rc genhtml_branch_coverage=1 00:05:25.766 --rc genhtml_function_coverage=1 00:05:25.766 --rc genhtml_legend=1 00:05:25.766 --rc geninfo_all_blocks=1 00:05:25.766 --rc geninfo_unexecuted_blocks=1 00:05:25.766 00:05:25.766 ' 00:05:25.766 02:46:40 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.766 --rc genhtml_branch_coverage=1 00:05:25.766 --rc genhtml_function_coverage=1 00:05:25.766 --rc genhtml_legend=1 00:05:25.766 --rc geninfo_all_blocks=1 00:05:25.766 --rc geninfo_unexecuted_blocks=1 00:05:25.766 00:05:25.766 ' 00:05:25.766 02:46:40 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:25.766 02:46:40 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:25.766 02:46:40 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:25.766 02:46:40 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.766 02:46:40 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.766 02:46:40 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.766 02:46:40 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.766 02:46:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.766 02:46:40 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.766 02:46:40 json_config -- paths/export.sh@5 -- # export PATH 00:05:25.766 02:46:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@51 -- # : 0 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:25.766 02:46:40 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:25.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:25.766 02:46:40 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:25.766 02:46:40 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:25.766 02:46:40 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:25.766 02:46:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:26.026 INFO: JSON configuration test init 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:26.026 02:46:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.026 02:46:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:26.026 02:46:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.026 02:46:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.026 02:46:40 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:26.026 02:46:40 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:26.026 02:46:40 json_config -- json_config/common.sh@10 -- # shift 00:05:26.026 02:46:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.026 02:46:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.026 02:46:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.026 02:46:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.026 02:46:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.026 02:46:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=107409 00:05:26.026 02:46:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:26.026 Waiting for target to run... 00:05:26.026 02:46:40 json_config -- json_config/common.sh@25 -- # waitforlisten 107409 /var/tmp/spdk_tgt.sock 00:05:26.026 02:46:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:26.026 02:46:40 json_config -- common/autotest_common.sh@835 -- # '[' -z 107409 ']' 00:05:26.026 02:46:40 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.026 02:46:40 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.026 02:46:40 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.026 02:46:40 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.026 02:46:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.026 [2024-12-14 02:46:40.966125] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:26.026 [2024-12-14 02:46:40.966170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107409 ] 00:05:26.285 [2024-12-14 02:46:41.412718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.544 [2024-12-14 02:46:41.434040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.803 02:46:41 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.803 02:46:41 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:26.803 02:46:41 json_config -- json_config/common.sh@26 -- # echo '' 00:05:26.803 00:05:26.803 02:46:41 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:26.803 02:46:41 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:26.803 02:46:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.803 02:46:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.803 02:46:41 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:26.803 02:46:41 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:26.803 02:46:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.803 02:46:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.803 02:46:41 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:26.803 02:46:41 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:26.803 02:46:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:30.095 02:46:44 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:30.095 02:46:44 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:30.095 02:46:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.095 02:46:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.095 02:46:44 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:30.095 02:46:44 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:30.095 02:46:44 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:30.095 02:46:44 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:30.095 02:46:44 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:30.095 02:46:44 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:30.095 02:46:44 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:30.095 02:46:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:30.095 02:46:45 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:30.095 02:46:45 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:30.095 02:46:45 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:30.095 02:46:45 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:30.095 02:46:45 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:30.095 02:46:45 json_config -- json_config/json_config.sh@54 -- # sort 00:05:30.095 02:46:45 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:30.095 02:46:45 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:30.095 02:46:45 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:30.095 02:46:45 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:30.095 02:46:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.096 02:46:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.096 02:46:45 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:30.096 02:46:45 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:30.096 02:46:45 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:30.096 02:46:45 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:30.096 02:46:45 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:30.096 02:46:45 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:30.096 02:46:45 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:30.096 02:46:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.096 02:46:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.096 02:46:45 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:30.096 02:46:45 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:30.096 02:46:45 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:30.096 02:46:45 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:30.096 02:46:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:30.358 MallocForNvmf0 00:05:30.358 02:46:45 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:30.358 02:46:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:30.617 MallocForNvmf1 00:05:30.617 02:46:45 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:30.617 02:46:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:30.617 [2024-12-14 02:46:45.719788] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:30.617 02:46:45 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:30.876 02:46:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:30.876 02:46:45 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:30.876 02:46:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:31.135 02:46:46 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:31.135 02:46:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:31.395 02:46:46 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:31.395 02:46:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:31.395 [2024-12-14 02:46:46.518201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:31.654 02:46:46 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:31.654 02:46:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:31.654 02:46:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.654 02:46:46 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:31.654 02:46:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:31.654 02:46:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.654 02:46:46 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:31.654 02:46:46 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:31.654 02:46:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:31.654 MallocBdevForConfigChangeCheck 00:05:31.912 02:46:46 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:31.912 02:46:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:31.912 02:46:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.912 02:46:46 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:31.912 02:46:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.171 02:46:47 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:32.171 INFO: shutting down applications... 
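The create_nvmf_subsystem_config steps traced above drive the running spdk_tgt entirely through scripts/rpc.py. Condensed into a standalone sketch using the same commands, socket path and values as the trace (error handling and the surrounding test plumbing omitted):

#!/usr/bin/env bash
# Sketch of the NVMe-oF/TCP setup performed by the json_config test, against an already-running spdk_tgt.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # backing malloc bdevs, sizes as traced
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport, options as traced
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC save_config > spdk_tgt_config.json                 # snapshot later used by the diff check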
00:05:32.171 02:46:47 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:32.171 02:46:47 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:32.171 02:46:47 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:32.171 02:46:47 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:34.076 Calling clear_iscsi_subsystem 00:05:34.077 Calling clear_nvmf_subsystem 00:05:34.077 Calling clear_nbd_subsystem 00:05:34.077 Calling clear_ublk_subsystem 00:05:34.077 Calling clear_vhost_blk_subsystem 00:05:34.077 Calling clear_vhost_scsi_subsystem 00:05:34.077 Calling clear_bdev_subsystem 00:05:34.077 02:46:48 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:34.077 02:46:48 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:34.077 02:46:48 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:34.077 02:46:48 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.077 02:46:48 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:34.077 02:46:48 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:34.077 02:46:49 json_config -- json_config/json_config.sh@352 -- # break 00:05:34.077 02:46:49 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:34.077 02:46:49 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:34.077 02:46:49 json_config -- json_config/common.sh@31 -- # local app=target 00:05:34.077 02:46:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:34.077 02:46:49 json_config -- json_config/common.sh@35 -- # [[ -n 107409 ]] 00:05:34.077 02:46:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 107409 00:05:34.077 02:46:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:34.077 02:46:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.077 02:46:49 json_config -- json_config/common.sh@41 -- # kill -0 107409 00:05:34.077 02:46:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.645 02:46:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.645 02:46:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.645 02:46:49 json_config -- json_config/common.sh@41 -- # kill -0 107409 00:05:34.645 02:46:49 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:34.645 02:46:49 json_config -- json_config/common.sh@43 -- # break 00:05:34.645 02:46:49 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:34.645 02:46:49 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:34.645 SPDK target shutdown done 00:05:34.645 02:46:49 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:34.645 INFO: relaunching applications... 
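json_config_test_shutdown_app, traced above, stops the target with SIGINT and then polls for up to 30 half-second intervals until the process is gone. A minimal sketch of that wait loop, assuming the PID is already known (in the test it is read from app_pid["target"]):

#!/usr/bin/env bash
# Sketch: signal spdk_tgt and wait for it to exit, mirroring the traced shutdown loop.
app_pid=107409                 # illustrative; the test reads this from app_pid["target"]

kill -SIGINT "$app_pid"
for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5                  # same 0.5 s poll interval as the trace
done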
00:05:34.645 02:46:49 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.645 02:46:49 json_config -- json_config/common.sh@9 -- # local app=target 00:05:34.645 02:46:49 json_config -- json_config/common.sh@10 -- # shift 00:05:34.645 02:46:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.645 02:46:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.645 02:46:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.645 02:46:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.645 02:46:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.645 02:46:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=108896 00:05:34.645 02:46:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:34.645 Waiting for target to run... 00:05:34.645 02:46:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.645 02:46:49 json_config -- json_config/common.sh@25 -- # waitforlisten 108896 /var/tmp/spdk_tgt.sock 00:05:34.645 02:46:49 json_config -- common/autotest_common.sh@835 -- # '[' -z 108896 ']' 00:05:34.645 02:46:49 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.645 02:46:49 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.645 02:46:49 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.645 02:46:49 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.645 02:46:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.645 [2024-12-14 02:46:49.718309] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:34.645 [2024-12-14 02:46:49.718379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108896 ] 00:05:35.213 [2024-12-14 02:46:50.188651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.213 [2024-12-14 02:46:50.209048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.502 [2024-12-14 02:46:53.217101] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.502 [2024-12-14 02:46:53.249345] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:39.070 02:46:53 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.070 02:46:53 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:39.070 02:46:53 json_config -- json_config/common.sh@26 -- # echo '' 00:05:39.070 00:05:39.070 02:46:53 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:39.070 02:46:53 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:39.070 INFO: Checking if target configuration is the same... 
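The same-configuration check that follows normalizes both sides with test/json_config/config_filter.py -method sort and compares them with diff -u; exit status 0 means the relaunched target reproduced the saved configuration. A condensed sketch of that comparison, assuming config_filter.py reads the configuration on stdin as the earlier check_empty pipeline suggests (the actual json_diff.sh stages both sides through temp files, as the trace below shows):

#!/usr/bin/env bash
# Sketch: compare the live configuration of the relaunched target with the saved JSON file.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
SORT="$SPDK/test/json_config/config_filter.py -method sort"

if diff -u <($RPC save_config | $SORT) <($SORT < "$SPDK/spdk_tgt_config.json"); then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi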
00:05:39.070 02:46:53 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.070 02:46:53 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:39.070 02:46:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.070 + '[' 2 -ne 2 ']' 00:05:39.070 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:39.070 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:39.070 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:39.070 +++ basename /dev/fd/62 00:05:39.070 ++ mktemp /tmp/62.XXX 00:05:39.070 + tmp_file_1=/tmp/62.jt2 00:05:39.070 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.070 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:39.070 + tmp_file_2=/tmp/spdk_tgt_config.json.ty5 00:05:39.070 + ret=0 00:05:39.070 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.330 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.330 + diff -u /tmp/62.jt2 /tmp/spdk_tgt_config.json.ty5 00:05:39.330 + echo 'INFO: JSON config files are the same' 00:05:39.330 INFO: JSON config files are the same 00:05:39.330 + rm /tmp/62.jt2 /tmp/spdk_tgt_config.json.ty5 00:05:39.330 + exit 0 00:05:39.330 02:46:54 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:39.330 02:46:54 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:39.330 INFO: changing configuration and checking if this can be detected... 00:05:39.330 02:46:54 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.330 02:46:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.588 02:46:54 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.588 02:46:54 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:39.588 02:46:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.588 + '[' 2 -ne 2 ']' 00:05:39.588 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:39.588 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:39.588 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:39.588 +++ basename /dev/fd/62 00:05:39.588 ++ mktemp /tmp/62.XXX 00:05:39.588 + tmp_file_1=/tmp/62.vJz 00:05:39.588 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.588 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:39.588 + tmp_file_2=/tmp/spdk_tgt_config.json.2D7 00:05:39.588 + ret=0 00:05:39.588 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.847 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.847 + diff -u /tmp/62.vJz /tmp/spdk_tgt_config.json.2D7 00:05:39.847 + ret=1 00:05:39.847 + echo '=== Start of file: /tmp/62.vJz ===' 00:05:39.847 + cat /tmp/62.vJz 00:05:39.847 + echo '=== End of file: /tmp/62.vJz ===' 00:05:39.847 + echo '' 00:05:39.847 + echo '=== Start of file: /tmp/spdk_tgt_config.json.2D7 ===' 00:05:39.847 + cat /tmp/spdk_tgt_config.json.2D7 00:05:39.847 + echo '=== End of file: /tmp/spdk_tgt_config.json.2D7 ===' 00:05:39.847 + echo '' 00:05:39.847 + rm /tmp/62.vJz /tmp/spdk_tgt_config.json.2D7 00:05:39.847 + exit 1 00:05:39.847 02:46:54 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:39.847 INFO: configuration change detected. 00:05:39.847 02:46:54 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:39.847 02:46:54 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:39.847 02:46:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.848 02:46:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.848 02:46:54 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:39.848 02:46:54 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:39.848 02:46:54 json_config -- json_config/json_config.sh@324 -- # [[ -n 108896 ]] 00:05:39.848 02:46:54 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:39.848 02:46:54 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:39.848 02:46:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.848 02:46:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.848 02:46:54 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:39.848 02:46:54 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:39.848 02:46:54 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:39.848 02:46:54 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:39.848 02:46:54 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:39.848 02:46:54 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:39.848 02:46:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:39.848 02:46:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.106 02:46:55 json_config -- json_config/json_config.sh@330 -- # killprocess 108896 00:05:40.106 02:46:55 json_config -- common/autotest_common.sh@954 -- # '[' -z 108896 ']' 00:05:40.106 02:46:55 json_config -- common/autotest_common.sh@958 -- # kill -0 108896 00:05:40.106 02:46:55 json_config -- common/autotest_common.sh@959 -- # uname 00:05:40.106 02:46:55 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.106 02:46:55 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108896 00:05:40.106 02:46:55 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.106 02:46:55 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.106 02:46:55 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108896' 00:05:40.106 killing process with pid 108896 00:05:40.106 02:46:55 json_config -- common/autotest_common.sh@973 -- # kill 108896 00:05:40.106 02:46:55 json_config -- common/autotest_common.sh@978 -- # wait 108896 00:05:41.483 02:46:56 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.483 02:46:56 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:41.483 02:46:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:41.483 02:46:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.483 02:46:56 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:41.483 02:46:56 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:41.483 INFO: Success 00:05:41.483 00:05:41.483 real 0m15.875s 00:05:41.483 user 0m17.043s 00:05:41.483 sys 0m2.057s 00:05:41.483 02:46:56 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.483 02:46:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.483 ************************************ 00:05:41.483 END TEST json_config 00:05:41.483 ************************************ 00:05:41.742 02:46:56 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.742 02:46:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.742 02:46:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.742 02:46:56 -- common/autotest_common.sh@10 -- # set +x 00:05:41.742 ************************************ 00:05:41.742 START TEST json_config_extra_key 00:05:41.742 ************************************ 00:05:41.742 02:46:56 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.742 02:46:56 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.742 02:46:56 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.742 02:46:56 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.742 02:46:56 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.742 02:46:56 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.742 02:46:56 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.742 02:46:56 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.742 02:46:56 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.742 02:46:56 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.742 02:46:56 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.742 02:46:56 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.742 02:46:56 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.742 02:46:56 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:41.743 02:46:56 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.743 02:46:56 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.743 --rc genhtml_branch_coverage=1 00:05:41.743 --rc genhtml_function_coverage=1 00:05:41.743 --rc genhtml_legend=1 00:05:41.743 --rc geninfo_all_blocks=1 00:05:41.743 --rc geninfo_unexecuted_blocks=1 00:05:41.743 00:05:41.743 ' 00:05:41.743 02:46:56 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.743 --rc genhtml_branch_coverage=1 00:05:41.743 --rc genhtml_function_coverage=1 00:05:41.743 --rc genhtml_legend=1 00:05:41.743 --rc geninfo_all_blocks=1 00:05:41.743 --rc geninfo_unexecuted_blocks=1 00:05:41.743 00:05:41.743 ' 00:05:41.743 02:46:56 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.743 --rc genhtml_branch_coverage=1 00:05:41.743 --rc genhtml_function_coverage=1 00:05:41.743 --rc genhtml_legend=1 00:05:41.743 --rc geninfo_all_blocks=1 00:05:41.743 --rc geninfo_unexecuted_blocks=1 00:05:41.743 00:05:41.743 ' 00:05:41.743 02:46:56 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.743 --rc genhtml_branch_coverage=1 00:05:41.743 --rc genhtml_function_coverage=1 00:05:41.743 --rc genhtml_legend=1 00:05:41.743 --rc geninfo_all_blocks=1 00:05:41.743 --rc geninfo_unexecuted_blocks=1 00:05:41.743 00:05:41.743 ' 00:05:41.743 02:46:56 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.743 02:46:56 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.743 02:46:56 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.743 02:46:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.743 02:46:56 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.743 02:46:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:41.743 02:46:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.743 02:46:56 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.743 02:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:41.743 02:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:41.743 02:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:41.743 02:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.743 02:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:41.743 02:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.743 02:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:41.743 02:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:41.743 02:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:41.743 02:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.743 02:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:41.743 INFO: launching applications... 
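The "[: : integer expression expected" message above is bash complaining that nvmf/common.sh line 33 compared an empty string with -eq inside a single-bracket test; the run is not affected, the warning is just emitted while the file is sourced. The usual guard looks roughly like this ($flag is a hypothetical stand-in, since the trace does not show which variable expanded empty):

    # fails with "integer expression expected" when $flag is empty
    [ "$flag" -eq 1 ] && echo enabled
    # safer: give the variable a numeric default, or test for emptiness first
    [ "${flag:-0}" -eq 1 ] && echo enabled
    [[ -n "$flag" && "$flag" -eq 1 ]] && echo enabled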
00:05:41.743 02:46:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.743 02:46:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:41.743 02:46:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:41.743 02:46:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.743 02:46:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.743 02:46:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.743 02:46:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.743 02:46:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.743 02:46:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=110196 00:05:41.743 02:46:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.743 Waiting for target to run... 00:05:41.743 02:46:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 110196 /var/tmp/spdk_tgt.sock 00:05:41.743 02:46:56 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 110196 ']' 00:05:41.743 02:46:56 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.743 02:46:56 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.743 02:46:56 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.743 02:46:56 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.743 02:46:56 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.743 02:46:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:42.003 [2024-12-14 02:46:56.893886] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:42.003 [2024-12-14 02:46:56.893931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110196 ] 00:05:42.262 [2024-12-14 02:46:57.175034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.262 [2024-12-14 02:46:57.187375] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.831 02:46:57 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.831 02:46:57 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:42.831 02:46:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:42.831 00:05:42.831 02:46:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:42.831 INFO: shutting down applications... 
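The launch half of this test (json_config_test_start_app) boils down to starting spdk_tgt with the extra-key JSON and waiting for its RPC socket to answer; a rough approximation of what common.sh does is sketched below (waitforlisten's real implementation polls more carefully than this loop):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start the target with the test configuration, talking RPC on a private socket
    $spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json $spdk/test/json_config/extra_key.json &
    app_pid=$!
    # crude stand-in for waitforlisten: poll until the RPC socket responds
    until $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done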
00:05:42.831 02:46:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:42.831 02:46:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:42.831 02:46:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:42.831 02:46:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 110196 ]] 00:05:42.831 02:46:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 110196 00:05:42.831 02:46:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:42.831 02:46:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.831 02:46:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 110196 00:05:42.831 02:46:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:43.401 02:46:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:43.401 02:46:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.401 02:46:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 110196 00:05:43.401 02:46:58 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:43.401 02:46:58 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:43.401 02:46:58 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:43.401 02:46:58 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:43.401 SPDK target shutdown done 00:05:43.401 02:46:58 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:43.401 Success 00:05:43.401 00:05:43.401 real 0m1.582s 00:05:43.401 user 0m1.364s 00:05:43.401 sys 0m0.399s 00:05:43.401 02:46:58 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.401 02:46:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.401 ************************************ 00:05:43.401 END TEST json_config_extra_key 00:05:43.401 ************************************ 00:05:43.401 02:46:58 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.401 02:46:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.401 02:46:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.401 02:46:58 -- common/autotest_common.sh@10 -- # set +x 00:05:43.401 ************************************ 00:05:43.401 START TEST alias_rpc 00:05:43.401 ************************************ 00:05:43.401 02:46:58 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.401 * Looking for test storage... 
00:05:43.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:43.401 02:46:58 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:43.401 02:46:58 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:43.401 02:46:58 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:43.401 02:46:58 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.401 02:46:58 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:43.401 02:46:58 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.401 02:46:58 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:43.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.401 --rc genhtml_branch_coverage=1 00:05:43.401 --rc genhtml_function_coverage=1 00:05:43.401 --rc genhtml_legend=1 00:05:43.401 --rc geninfo_all_blocks=1 00:05:43.401 --rc geninfo_unexecuted_blocks=1 00:05:43.401 00:05:43.401 ' 00:05:43.401 02:46:58 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:43.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.401 --rc genhtml_branch_coverage=1 00:05:43.401 --rc genhtml_function_coverage=1 00:05:43.401 --rc genhtml_legend=1 00:05:43.401 --rc geninfo_all_blocks=1 00:05:43.401 --rc geninfo_unexecuted_blocks=1 00:05:43.401 00:05:43.401 ' 00:05:43.401 02:46:58 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:43.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.401 --rc genhtml_branch_coverage=1 00:05:43.401 --rc genhtml_function_coverage=1 00:05:43.401 --rc genhtml_legend=1 00:05:43.401 --rc geninfo_all_blocks=1 00:05:43.401 --rc geninfo_unexecuted_blocks=1 00:05:43.401 00:05:43.401 ' 00:05:43.401 02:46:58 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:43.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.401 --rc genhtml_branch_coverage=1 00:05:43.401 --rc genhtml_function_coverage=1 00:05:43.401 --rc genhtml_legend=1 00:05:43.401 --rc geninfo_all_blocks=1 00:05:43.401 --rc geninfo_unexecuted_blocks=1 00:05:43.401 00:05:43.401 ' 00:05:43.402 02:46:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:43.402 02:46:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=110639 00:05:43.402 02:46:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 110639 00:05:43.402 02:46:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.402 02:46:58 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 110639 ']' 00:05:43.402 02:46:58 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.402 02:46:58 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.402 02:46:58 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.402 02:46:58 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.402 02:46:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.661 [2024-12-14 02:46:58.534979] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:43.661 [2024-12-14 02:46:58.535043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110639 ] 00:05:43.661 [2024-12-14 02:46:58.610126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.661 [2024-12-14 02:46:58.632095] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.920 02:46:58 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.920 02:46:58 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:43.920 02:46:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:44.180 02:46:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 110639 00:05:44.180 02:46:59 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 110639 ']' 00:05:44.180 02:46:59 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 110639 00:05:44.180 02:46:59 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:44.180 02:46:59 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.180 02:46:59 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110639 00:05:44.180 02:46:59 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.180 02:46:59 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.180 02:46:59 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110639' 00:05:44.180 killing process with pid 110639 00:05:44.180 02:46:59 alias_rpc -- common/autotest_common.sh@973 -- # kill 110639 00:05:44.180 02:46:59 alias_rpc -- common/autotest_common.sh@978 -- # wait 110639 00:05:44.440 00:05:44.440 real 0m1.095s 00:05:44.440 user 0m1.106s 00:05:44.440 sys 0m0.429s 00:05:44.440 02:46:59 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.440 02:46:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.440 ************************************ 00:05:44.440 END TEST alias_rpc 00:05:44.440 ************************************ 00:05:44.440 02:46:59 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:44.440 02:46:59 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:44.440 02:46:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.440 02:46:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.440 02:46:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.440 ************************************ 00:05:44.440 START TEST spdkcli_tcp 00:05:44.440 ************************************ 00:05:44.440 02:46:59 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:44.440 * Looking for test storage... 
00:05:44.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:44.440 02:46:59 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.440 02:46:59 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.440 02:46:59 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:44.700 02:46:59 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.700 02:46:59 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:44.701 02:46:59 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.701 02:46:59 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:44.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.701 --rc genhtml_branch_coverage=1 00:05:44.701 --rc genhtml_function_coverage=1 00:05:44.701 --rc genhtml_legend=1 00:05:44.701 --rc geninfo_all_blocks=1 00:05:44.701 --rc geninfo_unexecuted_blocks=1 00:05:44.701 00:05:44.701 ' 00:05:44.701 02:46:59 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:44.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.701 --rc genhtml_branch_coverage=1 00:05:44.701 --rc genhtml_function_coverage=1 00:05:44.701 --rc genhtml_legend=1 00:05:44.701 --rc geninfo_all_blocks=1 00:05:44.701 --rc 
geninfo_unexecuted_blocks=1 00:05:44.701 00:05:44.701 ' 00:05:44.701 02:46:59 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:44.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.701 --rc genhtml_branch_coverage=1 00:05:44.701 --rc genhtml_function_coverage=1 00:05:44.701 --rc genhtml_legend=1 00:05:44.701 --rc geninfo_all_blocks=1 00:05:44.701 --rc geninfo_unexecuted_blocks=1 00:05:44.701 00:05:44.701 ' 00:05:44.701 02:46:59 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:44.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.701 --rc genhtml_branch_coverage=1 00:05:44.701 --rc genhtml_function_coverage=1 00:05:44.701 --rc genhtml_legend=1 00:05:44.701 --rc geninfo_all_blocks=1 00:05:44.701 --rc geninfo_unexecuted_blocks=1 00:05:44.701 00:05:44.701 ' 00:05:44.701 02:46:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:44.701 02:46:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:44.701 02:46:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:44.701 02:46:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:44.701 02:46:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:44.701 02:46:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:44.701 02:46:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:44.701 02:46:59 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.701 02:46:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.701 02:46:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=110830 00:05:44.701 02:46:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 110830 00:05:44.701 02:46:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:44.701 02:46:59 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 110830 ']' 00:05:44.701 02:46:59 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.701 02:46:59 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.701 02:46:59 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.701 02:46:59 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.701 02:46:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.701 [2024-12-14 02:46:59.698490] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:44.701 [2024-12-14 02:46:59.698538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110830 ] 00:05:44.701 [2024-12-14 02:46:59.754453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.701 [2024-12-14 02:46:59.778592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.701 [2024-12-14 02:46:59.778595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.960 02:46:59 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.960 02:46:59 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:44.960 02:46:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=110933 00:05:44.960 02:46:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:44.960 02:46:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:45.220 [ 00:05:45.220 "bdev_malloc_delete", 00:05:45.220 "bdev_malloc_create", 00:05:45.220 "bdev_null_resize", 00:05:45.220 "bdev_null_delete", 00:05:45.220 "bdev_null_create", 00:05:45.220 "bdev_nvme_cuse_unregister", 00:05:45.220 "bdev_nvme_cuse_register", 00:05:45.220 "bdev_opal_new_user", 00:05:45.220 "bdev_opal_set_lock_state", 00:05:45.220 "bdev_opal_delete", 00:05:45.220 "bdev_opal_get_info", 00:05:45.220 "bdev_opal_create", 00:05:45.220 "bdev_nvme_opal_revert", 00:05:45.220 "bdev_nvme_opal_init", 00:05:45.220 "bdev_nvme_send_cmd", 00:05:45.220 "bdev_nvme_set_keys", 00:05:45.220 "bdev_nvme_get_path_iostat", 00:05:45.220 "bdev_nvme_get_mdns_discovery_info", 00:05:45.220 "bdev_nvme_stop_mdns_discovery", 00:05:45.220 "bdev_nvme_start_mdns_discovery", 00:05:45.220 "bdev_nvme_set_multipath_policy", 00:05:45.220 "bdev_nvme_set_preferred_path", 00:05:45.220 "bdev_nvme_get_io_paths", 00:05:45.220 "bdev_nvme_remove_error_injection", 00:05:45.220 "bdev_nvme_add_error_injection", 00:05:45.220 "bdev_nvme_get_discovery_info", 00:05:45.220 "bdev_nvme_stop_discovery", 00:05:45.220 "bdev_nvme_start_discovery", 00:05:45.220 "bdev_nvme_get_controller_health_info", 00:05:45.220 "bdev_nvme_disable_controller", 00:05:45.220 "bdev_nvme_enable_controller", 00:05:45.220 "bdev_nvme_reset_controller", 00:05:45.220 "bdev_nvme_get_transport_statistics", 00:05:45.220 "bdev_nvme_apply_firmware", 00:05:45.220 "bdev_nvme_detach_controller", 00:05:45.220 "bdev_nvme_get_controllers", 00:05:45.220 "bdev_nvme_attach_controller", 00:05:45.220 "bdev_nvme_set_hotplug", 00:05:45.220 "bdev_nvme_set_options", 00:05:45.220 "bdev_passthru_delete", 00:05:45.220 "bdev_passthru_create", 00:05:45.220 "bdev_lvol_set_parent_bdev", 00:05:45.220 "bdev_lvol_set_parent", 00:05:45.220 "bdev_lvol_check_shallow_copy", 00:05:45.220 "bdev_lvol_start_shallow_copy", 00:05:45.220 "bdev_lvol_grow_lvstore", 00:05:45.220 "bdev_lvol_get_lvols", 00:05:45.220 "bdev_lvol_get_lvstores", 00:05:45.220 "bdev_lvol_delete", 00:05:45.220 "bdev_lvol_set_read_only", 00:05:45.220 "bdev_lvol_resize", 00:05:45.220 "bdev_lvol_decouple_parent", 00:05:45.220 "bdev_lvol_inflate", 00:05:45.220 "bdev_lvol_rename", 00:05:45.220 "bdev_lvol_clone_bdev", 00:05:45.220 "bdev_lvol_clone", 00:05:45.220 "bdev_lvol_snapshot", 00:05:45.220 "bdev_lvol_create", 00:05:45.220 "bdev_lvol_delete_lvstore", 00:05:45.220 "bdev_lvol_rename_lvstore", 
00:05:45.220 "bdev_lvol_create_lvstore", 00:05:45.220 "bdev_raid_set_options", 00:05:45.220 "bdev_raid_remove_base_bdev", 00:05:45.220 "bdev_raid_add_base_bdev", 00:05:45.220 "bdev_raid_delete", 00:05:45.220 "bdev_raid_create", 00:05:45.220 "bdev_raid_get_bdevs", 00:05:45.220 "bdev_error_inject_error", 00:05:45.220 "bdev_error_delete", 00:05:45.220 "bdev_error_create", 00:05:45.220 "bdev_split_delete", 00:05:45.220 "bdev_split_create", 00:05:45.220 "bdev_delay_delete", 00:05:45.220 "bdev_delay_create", 00:05:45.220 "bdev_delay_update_latency", 00:05:45.220 "bdev_zone_block_delete", 00:05:45.220 "bdev_zone_block_create", 00:05:45.220 "blobfs_create", 00:05:45.220 "blobfs_detect", 00:05:45.220 "blobfs_set_cache_size", 00:05:45.220 "bdev_aio_delete", 00:05:45.220 "bdev_aio_rescan", 00:05:45.220 "bdev_aio_create", 00:05:45.220 "bdev_ftl_set_property", 00:05:45.220 "bdev_ftl_get_properties", 00:05:45.220 "bdev_ftl_get_stats", 00:05:45.220 "bdev_ftl_unmap", 00:05:45.220 "bdev_ftl_unload", 00:05:45.220 "bdev_ftl_delete", 00:05:45.220 "bdev_ftl_load", 00:05:45.220 "bdev_ftl_create", 00:05:45.220 "bdev_virtio_attach_controller", 00:05:45.220 "bdev_virtio_scsi_get_devices", 00:05:45.220 "bdev_virtio_detach_controller", 00:05:45.220 "bdev_virtio_blk_set_hotplug", 00:05:45.220 "bdev_iscsi_delete", 00:05:45.220 "bdev_iscsi_create", 00:05:45.220 "bdev_iscsi_set_options", 00:05:45.220 "accel_error_inject_error", 00:05:45.220 "ioat_scan_accel_module", 00:05:45.220 "dsa_scan_accel_module", 00:05:45.220 "iaa_scan_accel_module", 00:05:45.220 "vfu_virtio_create_fs_endpoint", 00:05:45.220 "vfu_virtio_create_scsi_endpoint", 00:05:45.220 "vfu_virtio_scsi_remove_target", 00:05:45.220 "vfu_virtio_scsi_add_target", 00:05:45.220 "vfu_virtio_create_blk_endpoint", 00:05:45.220 "vfu_virtio_delete_endpoint", 00:05:45.220 "keyring_file_remove_key", 00:05:45.220 "keyring_file_add_key", 00:05:45.220 "keyring_linux_set_options", 00:05:45.220 "fsdev_aio_delete", 00:05:45.220 "fsdev_aio_create", 00:05:45.220 "iscsi_get_histogram", 00:05:45.220 "iscsi_enable_histogram", 00:05:45.220 "iscsi_set_options", 00:05:45.220 "iscsi_get_auth_groups", 00:05:45.220 "iscsi_auth_group_remove_secret", 00:05:45.220 "iscsi_auth_group_add_secret", 00:05:45.220 "iscsi_delete_auth_group", 00:05:45.220 "iscsi_create_auth_group", 00:05:45.220 "iscsi_set_discovery_auth", 00:05:45.220 "iscsi_get_options", 00:05:45.220 "iscsi_target_node_request_logout", 00:05:45.220 "iscsi_target_node_set_redirect", 00:05:45.220 "iscsi_target_node_set_auth", 00:05:45.220 "iscsi_target_node_add_lun", 00:05:45.220 "iscsi_get_stats", 00:05:45.220 "iscsi_get_connections", 00:05:45.220 "iscsi_portal_group_set_auth", 00:05:45.220 "iscsi_start_portal_group", 00:05:45.220 "iscsi_delete_portal_group", 00:05:45.220 "iscsi_create_portal_group", 00:05:45.220 "iscsi_get_portal_groups", 00:05:45.220 "iscsi_delete_target_node", 00:05:45.220 "iscsi_target_node_remove_pg_ig_maps", 00:05:45.220 "iscsi_target_node_add_pg_ig_maps", 00:05:45.220 "iscsi_create_target_node", 00:05:45.220 "iscsi_get_target_nodes", 00:05:45.220 "iscsi_delete_initiator_group", 00:05:45.220 "iscsi_initiator_group_remove_initiators", 00:05:45.220 "iscsi_initiator_group_add_initiators", 00:05:45.220 "iscsi_create_initiator_group", 00:05:45.220 "iscsi_get_initiator_groups", 00:05:45.220 "nvmf_set_crdt", 00:05:45.220 "nvmf_set_config", 00:05:45.220 "nvmf_set_max_subsystems", 00:05:45.220 "nvmf_stop_mdns_prr", 00:05:45.220 "nvmf_publish_mdns_prr", 00:05:45.220 "nvmf_subsystem_get_listeners", 00:05:45.220 
"nvmf_subsystem_get_qpairs", 00:05:45.220 "nvmf_subsystem_get_controllers", 00:05:45.220 "nvmf_get_stats", 00:05:45.220 "nvmf_get_transports", 00:05:45.220 "nvmf_create_transport", 00:05:45.220 "nvmf_get_targets", 00:05:45.220 "nvmf_delete_target", 00:05:45.220 "nvmf_create_target", 00:05:45.220 "nvmf_subsystem_allow_any_host", 00:05:45.220 "nvmf_subsystem_set_keys", 00:05:45.221 "nvmf_subsystem_remove_host", 00:05:45.221 "nvmf_subsystem_add_host", 00:05:45.221 "nvmf_ns_remove_host", 00:05:45.221 "nvmf_ns_add_host", 00:05:45.221 "nvmf_subsystem_remove_ns", 00:05:45.221 "nvmf_subsystem_set_ns_ana_group", 00:05:45.221 "nvmf_subsystem_add_ns", 00:05:45.221 "nvmf_subsystem_listener_set_ana_state", 00:05:45.221 "nvmf_discovery_get_referrals", 00:05:45.221 "nvmf_discovery_remove_referral", 00:05:45.221 "nvmf_discovery_add_referral", 00:05:45.221 "nvmf_subsystem_remove_listener", 00:05:45.221 "nvmf_subsystem_add_listener", 00:05:45.221 "nvmf_delete_subsystem", 00:05:45.221 "nvmf_create_subsystem", 00:05:45.221 "nvmf_get_subsystems", 00:05:45.221 "env_dpdk_get_mem_stats", 00:05:45.221 "nbd_get_disks", 00:05:45.221 "nbd_stop_disk", 00:05:45.221 "nbd_start_disk", 00:05:45.221 "ublk_recover_disk", 00:05:45.221 "ublk_get_disks", 00:05:45.221 "ublk_stop_disk", 00:05:45.221 "ublk_start_disk", 00:05:45.221 "ublk_destroy_target", 00:05:45.221 "ublk_create_target", 00:05:45.221 "virtio_blk_create_transport", 00:05:45.221 "virtio_blk_get_transports", 00:05:45.221 "vhost_controller_set_coalescing", 00:05:45.221 "vhost_get_controllers", 00:05:45.221 "vhost_delete_controller", 00:05:45.221 "vhost_create_blk_controller", 00:05:45.221 "vhost_scsi_controller_remove_target", 00:05:45.221 "vhost_scsi_controller_add_target", 00:05:45.221 "vhost_start_scsi_controller", 00:05:45.221 "vhost_create_scsi_controller", 00:05:45.221 "thread_set_cpumask", 00:05:45.221 "scheduler_set_options", 00:05:45.221 "framework_get_governor", 00:05:45.221 "framework_get_scheduler", 00:05:45.221 "framework_set_scheduler", 00:05:45.221 "framework_get_reactors", 00:05:45.221 "thread_get_io_channels", 00:05:45.221 "thread_get_pollers", 00:05:45.221 "thread_get_stats", 00:05:45.221 "framework_monitor_context_switch", 00:05:45.221 "spdk_kill_instance", 00:05:45.221 "log_enable_timestamps", 00:05:45.221 "log_get_flags", 00:05:45.221 "log_clear_flag", 00:05:45.221 "log_set_flag", 00:05:45.221 "log_get_level", 00:05:45.221 "log_set_level", 00:05:45.221 "log_get_print_level", 00:05:45.221 "log_set_print_level", 00:05:45.221 "framework_enable_cpumask_locks", 00:05:45.221 "framework_disable_cpumask_locks", 00:05:45.221 "framework_wait_init", 00:05:45.221 "framework_start_init", 00:05:45.221 "scsi_get_devices", 00:05:45.221 "bdev_get_histogram", 00:05:45.221 "bdev_enable_histogram", 00:05:45.221 "bdev_set_qos_limit", 00:05:45.221 "bdev_set_qd_sampling_period", 00:05:45.221 "bdev_get_bdevs", 00:05:45.221 "bdev_reset_iostat", 00:05:45.221 "bdev_get_iostat", 00:05:45.221 "bdev_examine", 00:05:45.221 "bdev_wait_for_examine", 00:05:45.221 "bdev_set_options", 00:05:45.221 "accel_get_stats", 00:05:45.221 "accel_set_options", 00:05:45.221 "accel_set_driver", 00:05:45.221 "accel_crypto_key_destroy", 00:05:45.221 "accel_crypto_keys_get", 00:05:45.221 "accel_crypto_key_create", 00:05:45.221 "accel_assign_opc", 00:05:45.221 "accel_get_module_info", 00:05:45.221 "accel_get_opc_assignments", 00:05:45.221 "vmd_rescan", 00:05:45.221 "vmd_remove_device", 00:05:45.221 "vmd_enable", 00:05:45.221 "sock_get_default_impl", 00:05:45.221 "sock_set_default_impl", 
00:05:45.221 "sock_impl_set_options", 00:05:45.221 "sock_impl_get_options", 00:05:45.221 "iobuf_get_stats", 00:05:45.221 "iobuf_set_options", 00:05:45.221 "keyring_get_keys", 00:05:45.221 "vfu_tgt_set_base_path", 00:05:45.221 "framework_get_pci_devices", 00:05:45.221 "framework_get_config", 00:05:45.221 "framework_get_subsystems", 00:05:45.221 "fsdev_set_opts", 00:05:45.221 "fsdev_get_opts", 00:05:45.221 "trace_get_info", 00:05:45.221 "trace_get_tpoint_group_mask", 00:05:45.221 "trace_disable_tpoint_group", 00:05:45.221 "trace_enable_tpoint_group", 00:05:45.221 "trace_clear_tpoint_mask", 00:05:45.221 "trace_set_tpoint_mask", 00:05:45.221 "notify_get_notifications", 00:05:45.221 "notify_get_types", 00:05:45.221 "spdk_get_version", 00:05:45.221 "rpc_get_methods" 00:05:45.221 ] 00:05:45.221 02:47:00 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:45.221 02:47:00 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:45.221 02:47:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.221 02:47:00 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:45.221 02:47:00 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 110830 00:05:45.221 02:47:00 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 110830 ']' 00:05:45.221 02:47:00 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 110830 00:05:45.221 02:47:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:45.221 02:47:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.221 02:47:00 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110830 00:05:45.221 02:47:00 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.221 02:47:00 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.221 02:47:00 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110830' 00:05:45.221 killing process with pid 110830 00:05:45.221 02:47:00 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 110830 00:05:45.221 02:47:00 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 110830 00:05:45.481 00:05:45.481 real 0m1.091s 00:05:45.481 user 0m1.851s 00:05:45.481 sys 0m0.446s 00:05:45.481 02:47:00 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.481 02:47:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.481 ************************************ 00:05:45.481 END TEST spdkcli_tcp 00:05:45.481 ************************************ 00:05:45.481 02:47:00 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.481 02:47:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.481 02:47:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.481 02:47:00 -- common/autotest_common.sh@10 -- # set +x 00:05:45.740 ************************************ 00:05:45.740 START TEST dpdk_mem_utility 00:05:45.740 ************************************ 00:05:45.740 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.740 * Looking for test storage... 
00:05:45.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:45.740 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.740 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.740 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.740 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.740 02:47:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.741 02:47:00 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:45.741 02:47:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:45.741 02:47:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.741 02:47:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:45.741 02:47:00 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.741 02:47:00 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:45.741 02:47:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:45.741 02:47:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.741 02:47:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:45.741 02:47:00 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.741 02:47:00 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.741 02:47:00 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.741 02:47:00 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:45.741 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.741 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.741 --rc genhtml_branch_coverage=1 00:05:45.741 --rc genhtml_function_coverage=1 00:05:45.741 --rc genhtml_legend=1 00:05:45.741 --rc geninfo_all_blocks=1 00:05:45.741 --rc geninfo_unexecuted_blocks=1 00:05:45.741 00:05:45.741 ' 00:05:45.741 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.741 --rc 
genhtml_branch_coverage=1 00:05:45.741 --rc genhtml_function_coverage=1 00:05:45.741 --rc genhtml_legend=1 00:05:45.741 --rc geninfo_all_blocks=1 00:05:45.741 --rc geninfo_unexecuted_blocks=1 00:05:45.741 00:05:45.741 ' 00:05:45.741 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.741 --rc genhtml_branch_coverage=1 00:05:45.741 --rc genhtml_function_coverage=1 00:05:45.741 --rc genhtml_legend=1 00:05:45.741 --rc geninfo_all_blocks=1 00:05:45.741 --rc geninfo_unexecuted_blocks=1 00:05:45.741 00:05:45.741 ' 00:05:45.741 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.741 --rc genhtml_branch_coverage=1 00:05:45.741 --rc genhtml_function_coverage=1 00:05:45.741 --rc genhtml_legend=1 00:05:45.741 --rc geninfo_all_blocks=1 00:05:45.741 --rc geninfo_unexecuted_blocks=1 00:05:45.741 00:05:45.741 ' 00:05:45.741 02:47:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:45.741 02:47:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=111020 00:05:45.741 02:47:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 111020 00:05:45.741 02:47:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.741 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 111020 ']' 00:05:45.741 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.741 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.741 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.741 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.741 02:47:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.741 [2024-12-14 02:47:00.851106] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:45.741 [2024-12-14 02:47:00.851153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111020 ] 00:05:46.000 [2024-12-14 02:47:00.927991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.000 [2024-12-14 02:47:00.950468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.261 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.261 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:46.261 02:47:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:46.261 02:47:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:46.261 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.261 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.261 { 00:05:46.261 "filename": "/tmp/spdk_mem_dump.txt" 00:05:46.261 } 00:05:46.261 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.261 02:47:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:46.261 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:46.261 1 heaps totaling size 818.000000 MiB 00:05:46.261 size: 818.000000 MiB heap id: 0 00:05:46.261 end heaps---------- 00:05:46.261 9 mempools totaling size 603.782043 MiB 00:05:46.261 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:46.261 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:46.261 size: 100.555481 MiB name: bdev_io_111020 00:05:46.261 size: 50.003479 MiB name: msgpool_111020 00:05:46.261 size: 36.509338 MiB name: fsdev_io_111020 00:05:46.261 size: 21.763794 MiB name: PDU_Pool 00:05:46.261 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:46.261 size: 4.133484 MiB name: evtpool_111020 00:05:46.261 size: 0.026123 MiB name: Session_Pool 00:05:46.261 end mempools------- 00:05:46.261 6 memzones totaling size 4.142822 MiB 00:05:46.261 size: 1.000366 MiB name: RG_ring_0_111020 00:05:46.261 size: 1.000366 MiB name: RG_ring_1_111020 00:05:46.261 size: 1.000366 MiB name: RG_ring_4_111020 00:05:46.261 size: 1.000366 MiB name: RG_ring_5_111020 00:05:46.261 size: 0.125366 MiB name: RG_ring_2_111020 00:05:46.261 size: 0.015991 MiB name: RG_ring_3_111020 00:05:46.261 end memzones------- 00:05:46.261 02:47:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:46.261 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:46.261 list of free elements. 
size: 10.852478 MiB 00:05:46.261 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:46.261 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:46.262 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:46.262 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:46.262 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:46.262 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:46.262 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:46.262 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:46.262 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:46.262 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:46.262 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:46.262 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:46.262 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:46.262 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:46.262 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:46.262 list of standard malloc elements. size: 199.218628 MiB 00:05:46.262 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:46.262 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:46.262 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:46.262 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:46.262 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:46.262 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:46.262 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:46.262 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:46.262 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:46.262 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:46.262 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:46.262 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:46.262 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:46.262 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:46.262 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:46.262 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:46.262 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:46.262 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:46.262 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:46.262 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:46.262 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:46.262 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:46.262 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:46.262 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:46.262 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:46.262 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:46.262 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:46.262 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:46.262 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:46.262 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:46.262 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:46.262 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:46.262 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:46.262 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:46.262 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:46.262 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:46.262 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:46.262 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:46.262 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:46.262 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:46.262 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:46.262 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:46.262 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:46.262 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:46.262 list of memzone associated elements. size: 607.928894 MiB 00:05:46.262 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:46.262 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:46.262 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:46.262 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:46.262 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:46.262 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_111020_0 00:05:46.262 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:46.262 associated memzone info: size: 48.002930 MiB name: MP_msgpool_111020_0 00:05:46.262 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:46.262 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_111020_0 00:05:46.262 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:46.262 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:46.262 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:46.262 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:46.262 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:46.262 associated memzone info: size: 3.000122 MiB name: MP_evtpool_111020_0 00:05:46.262 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:46.262 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_111020 00:05:46.262 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:46.262 associated memzone info: size: 1.007996 MiB name: MP_evtpool_111020 00:05:46.262 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:46.262 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:46.262 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:46.262 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:46.262 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:46.262 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:46.262 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:46.262 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:46.262 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:46.262 associated memzone info: size: 1.000366 MiB name: RG_ring_0_111020 00:05:46.262 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:46.262 associated memzone info: size: 1.000366 MiB name: RG_ring_1_111020 00:05:46.262 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:46.262 associated memzone info: size: 1.000366 MiB name: RG_ring_4_111020 00:05:46.262 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:05:46.262 associated memzone info: size: 1.000366 MiB name: RG_ring_5_111020 00:05:46.262 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:46.262 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_111020 00:05:46.262 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:46.262 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_111020 00:05:46.262 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:46.262 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:46.262 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:46.262 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:46.262 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:46.262 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:46.262 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:46.262 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_111020 00:05:46.262 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:46.262 associated memzone info: size: 0.125366 MiB name: RG_ring_2_111020 00:05:46.262 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:46.262 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:46.262 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:46.262 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:46.262 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:46.262 associated memzone info: size: 0.015991 MiB name: RG_ring_3_111020 00:05:46.262 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:46.262 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:46.262 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:46.262 associated memzone info: size: 0.000183 MiB name: MP_msgpool_111020 00:05:46.262 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:46.262 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_111020 00:05:46.262 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:46.262 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_111020 00:05:46.262 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:46.262 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:46.262 02:47:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:46.262 02:47:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 111020 00:05:46.262 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 111020 ']' 00:05:46.262 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 111020 00:05:46.262 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:46.262 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.262 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111020 00:05:46.262 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.262 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.262 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111020' 00:05:46.262 killing process with pid 111020 00:05:46.262 02:47:01 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 111020 00:05:46.262 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 111020 00:05:46.522 00:05:46.523 real 0m0.977s 00:05:46.523 user 0m0.909s 00:05:46.523 sys 0m0.416s 00:05:46.523 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.523 02:47:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.523 ************************************ 00:05:46.523 END TEST dpdk_mem_utility 00:05:46.523 ************************************ 00:05:46.523 02:47:01 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:46.523 02:47:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.523 02:47:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.523 02:47:01 -- common/autotest_common.sh@10 -- # set +x 00:05:46.783 ************************************ 00:05:46.783 START TEST event 00:05:46.783 ************************************ 00:05:46.783 02:47:01 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:46.783 * Looking for test storage... 00:05:46.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:46.783 02:47:01 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.783 02:47:01 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.783 02:47:01 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.783 02:47:01 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.783 02:47:01 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.783 02:47:01 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.783 02:47:01 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.783 02:47:01 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.783 02:47:01 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.783 02:47:01 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.783 02:47:01 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.783 02:47:01 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.783 02:47:01 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.783 02:47:01 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.783 02:47:01 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.783 02:47:01 event -- scripts/common.sh@344 -- # case "$op" in 00:05:46.783 02:47:01 event -- scripts/common.sh@345 -- # : 1 00:05:46.783 02:47:01 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.783 02:47:01 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.783 02:47:01 event -- scripts/common.sh@365 -- # decimal 1 00:05:46.783 02:47:01 event -- scripts/common.sh@353 -- # local d=1 00:05:46.783 02:47:01 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.783 02:47:01 event -- scripts/common.sh@355 -- # echo 1 00:05:46.783 02:47:01 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.783 02:47:01 event -- scripts/common.sh@366 -- # decimal 2 00:05:46.783 02:47:01 event -- scripts/common.sh@353 -- # local d=2 00:05:46.783 02:47:01 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.783 02:47:01 event -- scripts/common.sh@355 -- # echo 2 00:05:46.783 02:47:01 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.783 02:47:01 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.783 02:47:01 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.783 02:47:01 event -- scripts/common.sh@368 -- # return 0 00:05:46.783 02:47:01 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.783 02:47:01 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.783 --rc genhtml_branch_coverage=1 00:05:46.783 --rc genhtml_function_coverage=1 00:05:46.783 --rc genhtml_legend=1 00:05:46.783 --rc geninfo_all_blocks=1 00:05:46.783 --rc geninfo_unexecuted_blocks=1 00:05:46.783 00:05:46.783 ' 00:05:46.783 02:47:01 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.783 --rc genhtml_branch_coverage=1 00:05:46.783 --rc genhtml_function_coverage=1 00:05:46.783 --rc genhtml_legend=1 00:05:46.783 --rc geninfo_all_blocks=1 00:05:46.783 --rc geninfo_unexecuted_blocks=1 00:05:46.783 00:05:46.783 ' 00:05:46.783 02:47:01 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.783 --rc genhtml_branch_coverage=1 00:05:46.783 --rc genhtml_function_coverage=1 00:05:46.783 --rc genhtml_legend=1 00:05:46.783 --rc geninfo_all_blocks=1 00:05:46.783 --rc geninfo_unexecuted_blocks=1 00:05:46.783 00:05:46.783 ' 00:05:46.783 02:47:01 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.783 --rc genhtml_branch_coverage=1 00:05:46.783 --rc genhtml_function_coverage=1 00:05:46.783 --rc genhtml_legend=1 00:05:46.783 --rc geninfo_all_blocks=1 00:05:46.783 --rc geninfo_unexecuted_blocks=1 00:05:46.783 00:05:46.783 ' 00:05:46.783 02:47:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:46.783 02:47:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:46.783 02:47:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.783 02:47:01 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:46.783 02:47:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.783 02:47:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.783 ************************************ 00:05:46.783 START TEST event_perf 00:05:46.783 ************************************ 00:05:46.783 02:47:01 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:46.783 Running I/O for 1 seconds...[2024-12-14 02:47:01.898030] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:46.783 [2024-12-14 02:47:01.898101] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111304 ] 00:05:47.043 [2024-12-14 02:47:01.976937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.043 [2024-12-14 02:47:02.002556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.043 [2024-12-14 02:47:02.002665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.043 [2024-12-14 02:47:02.002749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.043 [2024-12-14 02:47:02.002750] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.980 Running I/O for 1 seconds... 00:05:47.980 lcore 0: 204779 00:05:47.980 lcore 1: 204778 00:05:47.980 lcore 2: 204779 00:05:47.980 lcore 3: 204779 00:05:47.980 done. 00:05:47.980 00:05:47.980 real 0m1.164s 00:05:47.980 user 0m4.077s 00:05:47.980 sys 0m0.082s 00:05:47.980 02:47:03 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.980 02:47:03 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.980 ************************************ 00:05:47.980 END TEST event_perf 00:05:47.980 ************************************ 00:05:47.980 02:47:03 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:47.980 02:47:03 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:47.980 02:47:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.980 02:47:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.240 ************************************ 00:05:48.240 START TEST event_reactor 00:05:48.240 ************************************ 00:05:48.240 02:47:03 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:48.240 [2024-12-14 02:47:03.133894] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:48.240 [2024-12-14 02:47:03.133966] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111550 ] 00:05:48.240 [2024-12-14 02:47:03.212707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.240 [2024-12-14 02:47:03.234935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.178 test_start 00:05:49.178 oneshot 00:05:49.178 tick 100 00:05:49.178 tick 100 00:05:49.178 tick 250 00:05:49.178 tick 100 00:05:49.178 tick 100 00:05:49.178 tick 100 00:05:49.178 tick 250 00:05:49.178 tick 500 00:05:49.178 tick 100 00:05:49.178 tick 100 00:05:49.178 tick 250 00:05:49.178 tick 100 00:05:49.178 tick 100 00:05:49.178 test_end 00:05:49.178 00:05:49.178 real 0m1.153s 00:05:49.178 user 0m1.070s 00:05:49.178 sys 0m0.079s 00:05:49.178 02:47:04 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.178 02:47:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:49.178 ************************************ 00:05:49.178 END TEST event_reactor 00:05:49.178 ************************************ 00:05:49.178 02:47:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.178 02:47:04 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:49.178 02:47:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.178 02:47:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.437 ************************************ 00:05:49.437 START TEST event_reactor_perf 00:05:49.437 ************************************ 00:05:49.437 02:47:04 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.437 [2024-12-14 02:47:04.359937] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:49.437 [2024-12-14 02:47:04.360020] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111790 ] 00:05:49.437 [2024-12-14 02:47:04.437426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.437 [2024-12-14 02:47:04.459581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.375 test_start 00:05:50.375 test_end 00:05:50.375 Performance: 511360 events per second 00:05:50.375 00:05:50.375 real 0m1.152s 00:05:50.375 user 0m1.068s 00:05:50.375 sys 0m0.079s 00:05:50.375 02:47:05 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.375 02:47:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.375 ************************************ 00:05:50.375 END TEST event_reactor_perf 00:05:50.375 ************************************ 00:05:50.634 02:47:05 event -- event/event.sh@49 -- # uname -s 00:05:50.634 02:47:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:50.634 02:47:05 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:50.634 02:47:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.634 02:47:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.634 02:47:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.634 ************************************ 00:05:50.634 START TEST event_scheduler 00:05:50.634 ************************************ 00:05:50.634 02:47:05 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:50.634 * Looking for test storage... 
00:05:50.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:50.634 02:47:05 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.634 02:47:05 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.634 02:47:05 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.634 02:47:05 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.634 02:47:05 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:50.634 02:47:05 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.634 02:47:05 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.634 --rc genhtml_branch_coverage=1 00:05:50.634 --rc genhtml_function_coverage=1 00:05:50.634 --rc genhtml_legend=1 00:05:50.634 --rc geninfo_all_blocks=1 00:05:50.634 --rc geninfo_unexecuted_blocks=1 00:05:50.634 00:05:50.634 ' 00:05:50.634 02:47:05 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.634 --rc genhtml_branch_coverage=1 00:05:50.634 --rc genhtml_function_coverage=1 00:05:50.634 --rc genhtml_legend=1 00:05:50.634 --rc geninfo_all_blocks=1 00:05:50.634 --rc geninfo_unexecuted_blocks=1 00:05:50.634 00:05:50.634 ' 00:05:50.634 02:47:05 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.634 --rc genhtml_branch_coverage=1 00:05:50.634 --rc genhtml_function_coverage=1 00:05:50.634 --rc genhtml_legend=1 00:05:50.634 --rc geninfo_all_blocks=1 00:05:50.634 --rc geninfo_unexecuted_blocks=1 00:05:50.634 00:05:50.634 ' 00:05:50.635 02:47:05 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.635 --rc genhtml_branch_coverage=1 00:05:50.635 --rc genhtml_function_coverage=1 00:05:50.635 --rc genhtml_legend=1 00:05:50.635 --rc geninfo_all_blocks=1 00:05:50.635 --rc geninfo_unexecuted_blocks=1 00:05:50.635 00:05:50.635 ' 00:05:50.635 02:47:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:50.635 02:47:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=112066 00:05:50.635 02:47:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.635 02:47:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:50.635 02:47:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 112066 
00:05:50.635 02:47:05 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 112066 ']' 00:05:50.635 02:47:05 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.635 02:47:05 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.635 02:47:05 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.635 02:47:05 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.635 02:47:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.894 [2024-12-14 02:47:05.773303] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:50.894 [2024-12-14 02:47:05.773371] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112066 ] 00:05:50.894 [2024-12-14 02:47:05.848439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.894 [2024-12-14 02:47:05.874337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.894 [2024-12-14 02:47:05.874447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.894 [2024-12-14 02:47:05.874551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.894 [2024-12-14 02:47:05.874552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.894 02:47:05 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.894 02:47:05 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:50.894 02:47:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:50.894 02:47:05 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.894 02:47:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.894 [2024-12-14 02:47:05.931166] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:50.894 [2024-12-14 02:47:05.931182] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:50.894 [2024-12-14 02:47:05.931190] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:50.894 [2024-12-14 02:47:05.931195] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:50.894 [2024-12-14 02:47:05.931200] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:50.894 02:47:05 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.894 02:47:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:50.894 02:47:05 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.894 02:47:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.894 [2024-12-14 02:47:06.001143] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
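A minimal standalone sketch of the RPC sequence just traced: launch the scheduler test app with --wait-for-rpc, switch it to the dynamic scheduler, then complete framework init. Paths and flags are copied from this log; the sleep-based wait and the trailing cleanup are assumptions standing in for the waitforlisten/killprocess helpers the test script actually uses.

  #!/usr/bin/env bash
  set -euo pipefail

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # assumption: same checkout layout as this log
  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

  # Start the scheduler test app on cores 0-3, main core 2, gated on RPC init (flags as in the trace above).
  "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  app_pid=$!
  sleep 2    # crude stand-in for waitforlisten

  rpc framework_set_scheduler dynamic    # may log a dpdk governor error on partial SMT masks, as seen above
  rpc framework_start_init

  # ... scheduler_thread_create / scheduler_thread_delete calls would go here (see the trace that follows) ...

  kill "$app_pid"; wait "$app_pid" || true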
00:05:50.894 02:47:06 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.894 02:47:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:50.894 02:47:06 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.894 02:47:06 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.894 02:47:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.153 ************************************ 00:05:51.153 START TEST scheduler_create_thread 00:05:51.153 ************************************ 00:05:51.153 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 2 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 3 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 4 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 5 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 6 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 7 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 8 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 9 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 10 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.154 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.722 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.722 02:47:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:51.722 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.722 02:47:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.098 02:47:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.098 02:47:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:53.098 02:47:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:53.098 02:47:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.098 02:47:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.035 02:47:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.035 00:05:54.035 real 0m3.100s 00:05:54.035 user 0m0.026s 00:05:54.035 sys 0m0.004s 00:05:54.035 02:47:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.035 02:47:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.035 ************************************ 00:05:54.035 END TEST scheduler_create_thread 00:05:54.035 ************************************ 00:05:54.294 02:47:09 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:54.294 02:47:09 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 112066 00:05:54.294 02:47:09 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 112066 ']' 00:05:54.294 02:47:09 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 112066 00:05:54.294 02:47:09 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:54.294 02:47:09 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.294 02:47:09 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112066 00:05:54.294 02:47:09 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:54.294 02:47:09 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:54.294 02:47:09 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112066' 00:05:54.294 killing process with pid 112066 00:05:54.294 02:47:09 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 112066 00:05:54.294 02:47:09 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 112066 00:05:54.553 [2024-12-14 02:47:09.520185] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
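For reference, the scheduler_create_thread subtest traced above reduces to three plugin RPCs against the same socket. A hedged sketch, assuming the scheduler app from the previous sketch is still listening on /var/tmp/spdk.sock and that PYTHONPATH already includes the directory providing scheduler_plugin (that setup happens outside the excerpt shown here).

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock --plugin scheduler_plugin "$@"; }

  tid=$(rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)   # busy thread pinned to core 0; prints its thread id
  rpc scheduler_thread_set_active "$tid" 50                           # lower the thread to 50% active
  rpc scheduler_thread_delete "$tid"                                  # remove it again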
00:05:54.813 00:05:54.813 real 0m4.131s 00:05:54.813 user 0m6.644s 00:05:54.813 sys 0m0.384s 00:05:54.813 02:47:09 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.813 02:47:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.813 ************************************ 00:05:54.813 END TEST event_scheduler 00:05:54.813 ************************************ 00:05:54.813 02:47:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.813 02:47:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.813 02:47:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.813 02:47:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.813 02:47:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.813 ************************************ 00:05:54.813 START TEST app_repeat 00:05:54.813 ************************************ 00:05:54.813 02:47:09 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:54.813 02:47:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.813 02:47:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.813 02:47:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:54.813 02:47:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.813 02:47:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:54.813 02:47:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:54.813 02:47:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:54.813 02:47:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=112791 00:05:54.813 02:47:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.813 02:47:09 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.813 02:47:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 112791' 00:05:54.813 Process app_repeat pid: 112791 00:05:54.813 02:47:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.813 02:47:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.813 spdk_app_start Round 0 00:05:54.813 02:47:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112791 /var/tmp/spdk-nbd.sock 00:05:54.813 02:47:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112791 ']' 00:05:54.813 02:47:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.813 02:47:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.813 02:47:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.813 02:47:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.813 02:47:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.813 [2024-12-14 02:47:09.813193] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:54.813 [2024-12-14 02:47:09.813261] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112791 ] 00:05:54.813 [2024-12-14 02:47:09.888921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.813 [2024-12-14 02:47:09.911262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.813 [2024-12-14 02:47:09.911262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.072 02:47:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.072 02:47:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:55.072 02:47:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.072 Malloc0 00:05:55.331 02:47:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.331 Malloc1 00:05:55.331 02:47:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.331 02:47:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.331 02:47:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.331 02:47:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.332 02:47:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.332 02:47:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.332 02:47:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.332 02:47:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.332 02:47:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.332 02:47:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.332 02:47:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.332 02:47:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.332 02:47:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.332 02:47:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.332 02:47:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.332 02:47:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.591 /dev/nbd0 00:05:55.591 02:47:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.591 02:47:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.591 02:47:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:55.591 02:47:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.591 02:47:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.591 02:47:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.591 02:47:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:55.591 02:47:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.591 02:47:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.591 02:47:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.591 02:47:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.591 1+0 records in 00:05:55.591 1+0 records out 00:05:55.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226612 s, 18.1 MB/s 00:05:55.591 02:47:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.591 02:47:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.591 02:47:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.591 02:47:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.591 02:47:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.591 02:47:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.591 02:47:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.591 02:47:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.850 /dev/nbd1 00:05:55.850 02:47:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.850 02:47:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.850 02:47:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:55.850 02:47:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.850 02:47:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.850 02:47:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.850 02:47:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:55.850 02:47:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.850 02:47:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.850 02:47:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.850 02:47:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.850 1+0 records in 00:05:55.850 1+0 records out 00:05:55.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203761 s, 20.1 MB/s 00:05:55.850 02:47:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.850 02:47:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.850 02:47:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.850 02:47:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.850 02:47:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.850 02:47:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.850 02:47:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.850 
02:47:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.850 02:47:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.850 02:47:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.109 { 00:05:56.109 "nbd_device": "/dev/nbd0", 00:05:56.109 "bdev_name": "Malloc0" 00:05:56.109 }, 00:05:56.109 { 00:05:56.109 "nbd_device": "/dev/nbd1", 00:05:56.109 "bdev_name": "Malloc1" 00:05:56.109 } 00:05:56.109 ]' 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.109 { 00:05:56.109 "nbd_device": "/dev/nbd0", 00:05:56.109 "bdev_name": "Malloc0" 00:05:56.109 }, 00:05:56.109 { 00:05:56.109 "nbd_device": "/dev/nbd1", 00:05:56.109 "bdev_name": "Malloc1" 00:05:56.109 } 00:05:56.109 ]' 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.109 /dev/nbd1' 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.109 /dev/nbd1' 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.109 256+0 records in 00:05:56.109 256+0 records out 00:05:56.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106659 s, 98.3 MB/s 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.109 02:47:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.369 256+0 records in 00:05:56.369 256+0 records out 00:05:56.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141038 s, 74.3 MB/s 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.369 256+0 records in 00:05:56.369 256+0 records out 00:05:56.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145284 s, 72.2 MB/s 00:05:56.369 02:47:11 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.369 02:47:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.628 02:47:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.628 02:47:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.628 02:47:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.628 02:47:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.628 02:47:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:56.628 02:47:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.628 02:47:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.628 02:47:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.628 02:47:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.628 02:47:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.628 02:47:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.887 02:47:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.887 02:47:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.887 02:47:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.887 02:47:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.887 02:47:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.887 02:47:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.887 02:47:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.887 02:47:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.887 02:47:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.887 02:47:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.887 02:47:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.887 02:47:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.887 02:47:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.146 02:47:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.404 [2024-12-14 02:47:12.335629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.404 [2024-12-14 02:47:12.355627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.404 [2024-12-14 02:47:12.355627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.404 [2024-12-14 02:47:12.395963] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.404 [2024-12-14 02:47:12.396016] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.694 02:47:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.694 02:47:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:00.694 spdk_app_start Round 1 00:06:00.694 02:47:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112791 /var/tmp/spdk-nbd.sock 00:06:00.694 02:47:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112791 ']' 00:06:00.694 02:47:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.694 02:47:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.694 02:47:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
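For readers following the trace, each app_repeat round above reduces to a short RPC sequence against the app's /var/tmp/spdk-nbd.sock socket: create two malloc bdevs, export them over NBD, push random data through each device and compare it back, then tear the exports down. A minimal sketch of that flow, using only the RPCs and shell commands visible in the trace (paths are shortened and the temporary file names are illustrative):

sock=/var/tmp/spdk-nbd.sock
rpc=./scripts/rpc.py

$rpc -s "$sock" bdev_malloc_create 64 4096            # Malloc0: 64 MB malloc bdev, 4 KiB blocks, as passed in the trace
$rpc -s "$sock" bdev_malloc_create 64 4096            # Malloc1
$rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0      # export the bdevs as NBD block devices
$rpc -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256              # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct   # write it to the device
    cmp -b -n 1M /tmp/nbdrandtest "$nbd"                              # read back and verify
done

$rpc -s "$sock" nbd_stop_disk /dev/nbd0
$rpc -s "$sock" nbd_stop_disk /dev/nbd1
$rpc -s "$sock" nbd_get_disks                         # expect an empty list once both exports are stopped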
00:06:00.694 02:47:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.694 02:47:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.694 02:47:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.694 02:47:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:00.694 02:47:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.694 Malloc0 00:06:00.694 02:47:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.694 Malloc1 00:06:00.694 02:47:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.694 02:47:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.953 /dev/nbd0 00:06:00.953 02:47:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.953 02:47:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.953 02:47:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:00.953 02:47:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.953 02:47:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.953 02:47:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.953 02:47:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:00.953 02:47:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.953 02:47:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.953 02:47:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.953 02:47:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:00.953 1+0 records in 00:06:00.953 1+0 records out 00:06:00.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191727 s, 21.4 MB/s 00:06:00.953 02:47:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.953 02:47:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.953 02:47:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.953 02:47:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.953 02:47:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.953 02:47:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.953 02:47:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.953 02:47:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.212 /dev/nbd1 00:06:01.212 02:47:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.212 02:47:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.212 02:47:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:01.212 02:47:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:01.212 02:47:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.212 02:47:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.212 02:47:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:01.212 02:47:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:01.212 02:47:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.212 02:47:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.212 02:47:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.212 1+0 records in 00:06:01.212 1+0 records out 00:06:01.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225108 s, 18.2 MB/s 00:06:01.212 02:47:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.212 02:47:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:01.212 02:47:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.212 02:47:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.212 02:47:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:01.212 02:47:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.212 02:47:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.212 02:47:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.212 02:47:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.212 02:47:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:01.471 { 00:06:01.471 "nbd_device": "/dev/nbd0", 00:06:01.471 "bdev_name": "Malloc0" 00:06:01.471 }, 00:06:01.471 { 00:06:01.471 "nbd_device": "/dev/nbd1", 00:06:01.471 "bdev_name": "Malloc1" 00:06:01.471 } 00:06:01.471 ]' 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.471 { 00:06:01.471 "nbd_device": "/dev/nbd0", 00:06:01.471 "bdev_name": "Malloc0" 00:06:01.471 }, 00:06:01.471 { 00:06:01.471 "nbd_device": "/dev/nbd1", 00:06:01.471 "bdev_name": "Malloc1" 00:06:01.471 } 00:06:01.471 ]' 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.471 /dev/nbd1' 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.471 /dev/nbd1' 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.471 256+0 records in 00:06:01.471 256+0 records out 00:06:01.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107083 s, 97.9 MB/s 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.471 256+0 records in 00:06:01.471 256+0 records out 00:06:01.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133649 s, 78.5 MB/s 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.471 256+0 records in 00:06:01.471 256+0 records out 00:06:01.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014802 s, 70.8 MB/s 00:06:01.471 02:47:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.472 02:47:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.472 02:47:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.472 02:47:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.472 02:47:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.472 02:47:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.472 02:47:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.472 02:47:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.472 02:47:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.472 02:47:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.472 02:47:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.472 02:47:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.731 02:47:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.989 02:47:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.990 02:47:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.990 02:47:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.990 02:47:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.990 02:47:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.990 02:47:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.990 02:47:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.990 02:47:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.990 02:47:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.990 02:47:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.990 02:47:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.248 02:47:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.248 02:47:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.248 02:47:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.248 02:47:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.248 02:47:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.248 02:47:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.248 02:47:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.248 02:47:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.248 02:47:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.248 02:47:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.248 02:47:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.248 02:47:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.249 02:47:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.508 02:47:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.767 [2024-12-14 02:47:17.658260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.767 [2024-12-14 02:47:17.678222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.767 [2024-12-14 02:47:17.678222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.767 [2024-12-14 02:47:17.718746] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.767 [2024-12-14 02:47:17.718784] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.054 02:47:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.054 02:47:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:06.054 spdk_app_start Round 2 00:06:06.054 02:47:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112791 /var/tmp/spdk-nbd.sock 00:06:06.054 02:47:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112791 ']' 00:06:06.054 02:47:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.054 02:47:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.054 02:47:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
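The waitfornbd and waitfornbd_exit helpers that dominate the trace are simple polling loops over /proc/partitions: they retry up to 20 times until the kernel has registered (or dropped) the nbd device, and waitfornbd finishes with a single direct-I/O read as a sanity check. A condensed reconstruction of waitfornbd from the fragments above, with paths shortened; the sleep between retries is an assumption, since only the 20-iteration bound, the grep, the dd and the stat are visible in the trace:

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            break                          # the device node is known to the kernel
        fi
        sleep 1                            # assumed back-off between polls (not visible in the trace)
    done
    # read one 4 KiB block straight from the device, bypassing the page cache,
    # and succeed only if the read actually produced data
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}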
00:06:06.054 02:47:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.054 02:47:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.054 02:47:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.054 02:47:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:06.054 02:47:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.054 Malloc0 00:06:06.054 02:47:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.054 Malloc1 00:06:06.054 02:47:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.054 02:47:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.313 /dev/nbd0 00:06:06.313 02:47:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.313 02:47:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.313 02:47:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:06.313 02:47:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.313 02:47:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.313 02:47:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.313 02:47:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:06.313 02:47:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.313 02:47:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.313 02:47:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.313 02:47:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:06.313 1+0 records in 00:06:06.313 1+0 records out 00:06:06.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207561 s, 19.7 MB/s 00:06:06.313 02:47:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.313 02:47:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.313 02:47:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.313 02:47:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.313 02:47:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.313 02:47:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.313 02:47:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.313 02:47:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.572 /dev/nbd1 00:06:06.572 02:47:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.572 02:47:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.572 02:47:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:06.572 02:47:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.572 02:47:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.572 02:47:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.572 02:47:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:06.572 02:47:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.572 02:47:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.572 02:47:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.572 02:47:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.572 1+0 records in 00:06:06.572 1+0 records out 00:06:06.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193672 s, 21.1 MB/s 00:06:06.572 02:47:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.572 02:47:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.572 02:47:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.572 02:47:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.572 02:47:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.572 02:47:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.572 02:47:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.572 02:47:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.572 02:47:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.572 02:47:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:06.831 { 00:06:06.831 "nbd_device": "/dev/nbd0", 00:06:06.831 "bdev_name": "Malloc0" 00:06:06.831 }, 00:06:06.831 { 00:06:06.831 "nbd_device": "/dev/nbd1", 00:06:06.831 "bdev_name": "Malloc1" 00:06:06.831 } 00:06:06.831 ]' 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.831 { 00:06:06.831 "nbd_device": "/dev/nbd0", 00:06:06.831 "bdev_name": "Malloc0" 00:06:06.831 }, 00:06:06.831 { 00:06:06.831 "nbd_device": "/dev/nbd1", 00:06:06.831 "bdev_name": "Malloc1" 00:06:06.831 } 00:06:06.831 ]' 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.831 /dev/nbd1' 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.831 /dev/nbd1' 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.831 256+0 records in 00:06:06.831 256+0 records out 00:06:06.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103033 s, 102 MB/s 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.831 256+0 records in 00:06:06.831 256+0 records out 00:06:06.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014022 s, 74.8 MB/s 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.831 256+0 records in 00:06:06.831 256+0 records out 00:06:06.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151882 s, 69.0 MB/s 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.831 02:47:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.832 02:47:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.832 02:47:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.832 02:47:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.832 02:47:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.832 02:47:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.832 02:47:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.832 02:47:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.832 02:47:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.832 02:47:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.832 02:47:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.832 02:47:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.832 02:47:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.832 02:47:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.090 02:47:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.090 02:47:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.090 02:47:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.090 02:47:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.090 02:47:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.091 02:47:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.091 02:47:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.091 02:47:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.091 02:47:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.091 02:47:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.349 02:47:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.349 02:47:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.349 02:47:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.349 02:47:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.350 02:47:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.350 02:47:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.350 02:47:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.350 02:47:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.350 02:47:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.350 02:47:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.350 02:47:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.608 02:47:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.608 02:47:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.608 02:47:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.608 02:47:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.608 02:47:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.608 02:47:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.608 02:47:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:07.608 02:47:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.608 02:47:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.608 02:47:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.608 02:47:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.608 02:47:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.608 02:47:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.868 02:47:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.868 [2024-12-14 02:47:22.957922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.868 [2024-12-14 02:47:22.977693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.868 [2024-12-14 02:47:22.977693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.127 [2024-12-14 02:47:23.018232] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.127 [2024-12-14 02:47:23.018266] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.415 02:47:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 112791 /var/tmp/spdk-nbd.sock 00:06:11.415 02:47:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112791 ']' 00:06:11.415 02:47:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.415 02:47:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.415 02:47:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
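Zooming out, every round in this log has the same shape: wait for the relaunched app's RPC socket, re-create the malloc bdevs, run the NBD data-verification pass, then ask the app to terminate itself and give it a moment before the next round starts. In outline, using only the helpers and RPCs that appear in the trace (the $app_pid variable name is illustrative):

for round in 0 1 2; do
    echo "spdk_app_start Round $round"
    waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock                          # block until the RPC socket is up
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM    # end this round
    sleep 3                                                                  # let app_repeat bring up the next round
done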
00:06:11.415 02:47:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.415 02:47:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.415 02:47:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.415 02:47:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:11.415 02:47:26 event.app_repeat -- event/event.sh@39 -- # killprocess 112791 00:06:11.415 02:47:26 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 112791 ']' 00:06:11.415 02:47:26 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 112791 00:06:11.415 02:47:26 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:11.415 02:47:26 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.415 02:47:26 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112791 00:06:11.415 02:47:26 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.415 02:47:26 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.415 02:47:26 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112791' 00:06:11.415 killing process with pid 112791 00:06:11.415 02:47:26 event.app_repeat -- common/autotest_common.sh@973 -- # kill 112791 00:06:11.415 02:47:26 event.app_repeat -- common/autotest_common.sh@978 -- # wait 112791 00:06:11.415 spdk_app_start is called in Round 0. 00:06:11.415 Shutdown signal received, stop current app iteration 00:06:11.415 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:06:11.415 spdk_app_start is called in Round 1. 00:06:11.415 Shutdown signal received, stop current app iteration 00:06:11.415 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:06:11.415 spdk_app_start is called in Round 2. 00:06:11.415 Shutdown signal received, stop current app iteration 00:06:11.415 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:06:11.415 spdk_app_start is called in Round 3. 
00:06:11.415 Shutdown signal received, stop current app iteration 00:06:11.415 02:47:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:11.415 02:47:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:11.415 00:06:11.415 real 0m16.439s 00:06:11.415 user 0m36.263s 00:06:11.415 sys 0m2.547s 00:06:11.415 02:47:26 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.415 02:47:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.415 ************************************ 00:06:11.415 END TEST app_repeat 00:06:11.415 ************************************ 00:06:11.415 02:47:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:11.415 02:47:26 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:11.415 02:47:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.415 02:47:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.415 02:47:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.415 ************************************ 00:06:11.415 START TEST cpu_locks 00:06:11.415 ************************************ 00:06:11.415 02:47:26 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:11.415 * Looking for test storage... 00:06:11.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:11.415 02:47:26 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:11.415 02:47:26 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:11.415 02:47:26 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:11.415 02:47:26 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:11.415 02:47:26 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.415 02:47:26 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.415 02:47:26 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.416 02:47:26 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:11.416 02:47:26 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.416 02:47:26 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.416 --rc genhtml_branch_coverage=1 00:06:11.416 --rc genhtml_function_coverage=1 00:06:11.416 --rc genhtml_legend=1 00:06:11.416 --rc geninfo_all_blocks=1 00:06:11.416 --rc geninfo_unexecuted_blocks=1 00:06:11.416 00:06:11.416 ' 00:06:11.416 02:47:26 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.416 --rc genhtml_branch_coverage=1 00:06:11.416 --rc genhtml_function_coverage=1 00:06:11.416 --rc genhtml_legend=1 00:06:11.416 --rc geninfo_all_blocks=1 00:06:11.416 --rc geninfo_unexecuted_blocks=1 00:06:11.416 00:06:11.416 ' 00:06:11.416 02:47:26 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.416 --rc genhtml_branch_coverage=1 00:06:11.416 --rc genhtml_function_coverage=1 00:06:11.416 --rc genhtml_legend=1 00:06:11.416 --rc geninfo_all_blocks=1 00:06:11.416 --rc geninfo_unexecuted_blocks=1 00:06:11.416 00:06:11.416 ' 00:06:11.416 02:47:26 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:11.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.416 --rc genhtml_branch_coverage=1 00:06:11.416 --rc genhtml_function_coverage=1 00:06:11.416 --rc genhtml_legend=1 00:06:11.416 --rc geninfo_all_blocks=1 00:06:11.416 --rc geninfo_unexecuted_blocks=1 00:06:11.416 00:06:11.416 ' 00:06:11.416 02:47:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:11.416 02:47:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:11.416 02:47:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:11.416 02:47:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:11.416 02:47:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.416 02:47:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.416 02:47:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.416 ************************************ 
00:06:11.416 START TEST default_locks 00:06:11.416 ************************************ 00:06:11.416 02:47:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:11.416 02:47:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=115724 00:06:11.416 02:47:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 115724 00:06:11.416 02:47:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.416 02:47:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 115724 ']' 00:06:11.416 02:47:26 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.416 02:47:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.416 02:47:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.416 02:47:26 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.416 02:47:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.416 [2024-12-14 02:47:26.546659] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:11.416 [2024-12-14 02:47:26.546701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115724 ] 00:06:11.675 [2024-12-14 02:47:26.624308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.675 [2024-12-14 02:47:26.646197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.934 02:47:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.934 02:47:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:11.934 02:47:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 115724 00:06:11.934 02:47:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 115724 00:06:11.934 02:47:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.502 lslocks: write error 00:06:12.502 02:47:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 115724 00:06:12.502 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 115724 ']' 00:06:12.502 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 115724 00:06:12.502 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:12.502 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.502 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115724 00:06:12.502 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.502 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.502 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115724' 
00:06:12.502 killing process with pid 115724 00:06:12.502 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 115724 00:06:12.502 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 115724 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 115724 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 115724 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 115724 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 115724 ']' 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
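The assertion at the heart of default_locks is the locks_exist check traced above: lslocks is asked whether the spdk_tgt pid holds a file lock whose path contains spdk_cpu_lock. A minimal sketch of that pattern, with an illustrative pid:

    # grep -q exits on the first match and closes the pipe early, which is the
    # likely source of the harmless "lslocks: write error" seen in the trace.
    lslocks -p 115724 | grep -q spdk_cpu_lock && echo "core lock is held"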
00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (115724) - No such process 00:06:12.762 ERROR: process (pid: 115724) is no longer running 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.762 00:06:12.762 real 0m1.210s 00:06:12.762 user 0m1.169s 00:06:12.762 sys 0m0.559s 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.762 02:47:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.762 ************************************ 00:06:12.762 END TEST default_locks 00:06:12.762 ************************************ 00:06:12.762 02:47:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:12.762 02:47:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.762 02:47:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.762 02:47:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.762 ************************************ 00:06:12.762 START TEST default_locks_via_rpc 00:06:12.762 ************************************ 00:06:12.762 02:47:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:12.762 02:47:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=115974 00:06:12.762 02:47:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 115974 00:06:12.762 02:47:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.762 02:47:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 115974 ']' 00:06:12.762 02:47:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.762 02:47:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.762 02:47:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:12.762 02:47:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.762 02:47:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.762 [2024-12-14 02:47:27.830692] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:12.762 [2024-12-14 02:47:27.830738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115974 ] 00:06:13.022 [2024-12-14 02:47:27.907443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.022 [2024-12-14 02:47:27.927691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 115974 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 115974 00:06:13.022 02:47:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.589 02:47:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 115974 00:06:13.589 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 115974 ']' 00:06:13.589 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 115974 00:06:13.589 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:13.589 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.589 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115974 00:06:13.589 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.590 02:47:28 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.590 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115974' 00:06:13.590 killing process with pid 115974 00:06:13.590 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 115974 00:06:13.590 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 115974 00:06:13.849 00:06:13.849 real 0m1.140s 00:06:13.849 user 0m1.108s 00:06:13.849 sys 0m0.527s 00:06:13.849 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.849 02:47:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.849 ************************************ 00:06:13.849 END TEST default_locks_via_rpc 00:06:13.849 ************************************ 00:06:13.849 02:47:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:13.849 02:47:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.849 02:47:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.849 02:47:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.109 ************************************ 00:06:14.109 START TEST non_locking_app_on_locked_coremask 00:06:14.109 ************************************ 00:06:14.109 02:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:14.109 02:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=116222 00:06:14.109 02:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.109 02:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 116222 /var/tmp/spdk.sock 00:06:14.109 02:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116222 ']' 00:06:14.109 02:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.109 02:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.109 02:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.109 02:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.109 02:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.109 [2024-12-14 02:47:29.042071] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
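default_locks_via_rpc, traced above, drives the same bookkeeping over JSON-RPC instead of inspecting the lock from outside: it releases the per-core lock files, asserts that no /var/tmp/spdk_cpu_lock_* entries remain, then re-claims them. A sketch of the sequence using the harness's rpc_cmd helper (which forwards to SPDK's rpc.py on the target's socket; the pid variable name is illustrative):

    rpc_cmd framework_disable_cpumask_locks   # drop the /var/tmp/spdk_cpu_lock_* files
    rpc_cmd framework_enable_cpumask_locks    # re-claim the cores in the -m mask
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # lock is held again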
00:06:14.109 [2024-12-14 02:47:29.042112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116222 ] 00:06:14.109 [2024-12-14 02:47:29.118086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.109 [2024-12-14 02:47:29.140863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.368 02:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.368 02:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:14.368 02:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=116278 00:06:14.368 02:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 116278 /var/tmp/spdk2.sock 00:06:14.368 02:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:14.368 02:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116278 ']' 00:06:14.368 02:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.368 02:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.368 02:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.368 02:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.368 02:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.368 [2024-12-14 02:47:29.394408] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:14.368 [2024-12-14 02:47:29.394456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116278 ] 00:06:14.368 [2024-12-14 02:47:29.480566] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
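non_locking_app_on_locked_coremask shows the escape hatch: the second target is launched with --disable-cpumask-locks (hence the "CPU core locks deactivated." notice above) and a separate RPC socket, so it can share core 0 with the first, lock-holding instance. Roughly, with spdk_tgt abbreviating the full build/bin path used in the trace:

    spdk_tgt -m 0x1 &                                                  # claims core 0
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # coexists, claims nothing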
00:06:14.368 [2024-12-14 02:47:29.480592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.627 [2024-12-14 02:47:29.528329] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.195 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.195 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.195 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 116222 00:06:15.195 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 116222 00:06:15.195 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.763 lslocks: write error 00:06:15.763 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 116222 00:06:15.763 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116222 ']' 00:06:15.763 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 116222 00:06:15.763 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:15.763 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.763 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116222 00:06:15.763 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.763 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.763 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116222' 00:06:15.763 killing process with pid 116222 00:06:15.763 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 116222 00:06:15.763 02:47:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 116222 00:06:16.331 02:47:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 116278 00:06:16.331 02:47:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116278 ']' 00:06:16.331 02:47:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 116278 00:06:16.331 02:47:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.331 02:47:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.331 02:47:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116278 00:06:16.331 02:47:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.331 02:47:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.331 02:47:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116278' 00:06:16.331 killing 
process with pid 116278 00:06:16.331 02:47:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 116278 00:06:16.331 02:47:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 116278 00:06:16.591 00:06:16.591 real 0m2.700s 00:06:16.591 user 0m2.853s 00:06:16.591 sys 0m0.920s 00:06:16.591 02:47:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.591 02:47:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.591 ************************************ 00:06:16.591 END TEST non_locking_app_on_locked_coremask 00:06:16.591 ************************************ 00:06:16.591 02:47:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:16.591 02:47:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.591 02:47:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.591 02:47:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.850 ************************************ 00:06:16.850 START TEST locking_app_on_unlocked_coremask 00:06:16.850 ************************************ 00:06:16.850 02:47:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:16.850 02:47:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=116715 00:06:16.850 02:47:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 116715 /var/tmp/spdk.sock 00:06:16.850 02:47:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:16.850 02:47:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116715 ']' 00:06:16.850 02:47:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.850 02:47:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.850 02:47:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.850 02:47:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.850 02:47:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.850 [2024-12-14 02:47:31.808724] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:16.850 [2024-12-14 02:47:31.808766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116715 ] 00:06:16.850 [2024-12-14 02:47:31.883476] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
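locking_app_on_unlocked_coremask inverts the roles: the first target opts out of locking (the "CPU core locks deactivated." notice above), so a second, lock-enabled target on the same mask is expected to start and claim core 0 itself. In outline, with spdk_tgt again standing in for the full path:

    spdk_tgt -m 0x1 --disable-cpumask-locks &   # holds no core locks
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # free to claim core 0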
00:06:16.850 [2024-12-14 02:47:31.883500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.850 [2024-12-14 02:47:31.903511] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.110 02:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.110 02:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:17.110 02:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=116825 00:06:17.110 02:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 116825 /var/tmp/spdk2.sock 00:06:17.110 02:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:17.110 02:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116825 ']' 00:06:17.110 02:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.110 02:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.110 02:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.110 02:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.110 02:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.110 [2024-12-14 02:47:32.171411] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:17.110 [2024-12-14 02:47:32.171463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116825 ] 00:06:17.369 [2024-12-14 02:47:32.263029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.369 [2024-12-14 02:47:32.305127] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.938 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.938 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:17.938 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 116825 00:06:17.938 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 116825 00:06:17.938 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.506 lslocks: write error 00:06:18.506 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 116715 00:06:18.506 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116715 ']' 00:06:18.506 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 116715 00:06:18.506 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.506 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.506 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116715 00:06:18.506 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.506 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.506 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116715' 00:06:18.506 killing process with pid 116715 00:06:18.506 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 116715 00:06:18.506 02:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 116715 00:06:19.075 02:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 116825 00:06:19.075 02:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116825 ']' 00:06:19.075 02:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 116825 00:06:19.075 02:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:19.075 02:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.075 02:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116825 00:06:19.075 02:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.075 02:47:34 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.075 02:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116825' 00:06:19.075 killing process with pid 116825 00:06:19.075 02:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 116825 00:06:19.075 02:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 116825 00:06:19.334 00:06:19.334 real 0m2.696s 00:06:19.334 user 0m2.851s 00:06:19.334 sys 0m0.913s 00:06:19.334 02:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.334 02:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.334 ************************************ 00:06:19.334 END TEST locking_app_on_unlocked_coremask 00:06:19.334 ************************************ 00:06:19.594 02:47:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:19.594 02:47:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.594 02:47:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.594 02:47:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.594 ************************************ 00:06:19.594 START TEST locking_app_on_locked_coremask 00:06:19.594 ************************************ 00:06:19.594 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:19.594 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=117199 00:06:19.594 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 117199 /var/tmp/spdk.sock 00:06:19.594 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.594 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 117199 ']' 00:06:19.594 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.594 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.594 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.594 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.594 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.594 [2024-12-14 02:47:34.579022] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:19.594 [2024-12-14 02:47:34.579065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117199 ] 00:06:19.594 [2024-12-14 02:47:34.654708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.594 [2024-12-14 02:47:34.677630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=117340 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 117340 /var/tmp/spdk2.sock 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 117340 /var/tmp/spdk2.sock 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 117340 /var/tmp/spdk2.sock 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 117340 ']' 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.853 02:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.853 [2024-12-14 02:47:34.930462] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:19.853 [2024-12-14 02:47:34.930510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117340 ] 00:06:20.112 [2024-12-14 02:47:35.017449] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 117199 has claimed it. 00:06:20.112 [2024-12-14 02:47:35.017482] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (117340) - No such process 00:06:20.679 ERROR: process (pid: 117340) is no longer running 00:06:20.679 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.679 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:20.679 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:20.679 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.679 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.679 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.679 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 117199 00:06:20.679 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 117199 00:06:20.679 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.938 lslocks: write error 00:06:20.938 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 117199 00:06:20.938 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 117199 ']' 00:06:20.938 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 117199 00:06:20.938 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.938 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.938 02:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117199 00:06:20.938 02:47:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.938 02:47:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.938 02:47:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117199' 00:06:20.938 killing process with pid 117199 00:06:20.938 02:47:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 117199 00:06:20.938 02:47:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 117199 00:06:21.197 00:06:21.197 real 0m1.769s 00:06:21.198 user 0m1.893s 00:06:21.198 sys 0m0.607s 00:06:21.198 02:47:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.198 
02:47:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.198 ************************************ 00:06:21.198 END TEST locking_app_on_locked_coremask 00:06:21.198 ************************************ 00:06:21.457 02:47:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:21.457 02:47:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.457 02:47:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.457 02:47:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.457 ************************************ 00:06:21.457 START TEST locking_overlapped_coremask 00:06:21.457 ************************************ 00:06:21.457 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:21.457 02:47:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=117668 00:06:21.457 02:47:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 117668 /var/tmp/spdk.sock 00:06:21.457 02:47:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:21.457 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 117668 ']' 00:06:21.457 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.457 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.457 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.457 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.457 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.457 [2024-12-14 02:47:36.415121] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
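locking_app_on_locked_coremask, which ends above, is the straight conflict case: with the first target holding core 0, a second lock-enabled target on the same mask must fail to start, and the trace shows exactly that ("Cannot create lock on core 0, probably process 117199 has claimed it." followed by "Unable to acquire lock on assigned core mask - exiting."). In outline:

    spdk_tgt -m 0x1 &                        # claims core 0
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # expected to exit with the claim error above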
00:06:21.457 [2024-12-14 02:47:36.415159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117668 ] 00:06:21.457 [2024-12-14 02:47:36.489615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.457 [2024-12-14 02:47:36.514558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.457 [2024-12-14 02:47:36.514670] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.457 [2024-12-14 02:47:36.514671] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.716 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.716 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:21.716 02:47:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=117679 00:06:21.716 02:47:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 117679 /var/tmp/spdk2.sock 00:06:21.716 02:47:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:21.716 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:21.716 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 117679 /var/tmp/spdk2.sock 00:06:21.716 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:21.716 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.716 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:21.716 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.716 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 117679 /var/tmp/spdk2.sock 00:06:21.716 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 117679 ']' 00:06:21.716 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.717 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.717 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.717 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.717 02:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.717 [2024-12-14 02:47:36.776050] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
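Negative expectations like this are wrapped in the harness's NOT helper (via valid_exec_arg in autotest_common.sh), which inverts the exit status so the step passes only when the wrapped command fails, for example:

    NOT waitforlisten 117679 /var/tmp/spdk2.sock   # passes because the second target never comes up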
00:06:21.717 [2024-12-14 02:47:36.776097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117679 ] 00:06:21.975 [2024-12-14 02:47:36.866390] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117668 has claimed it. 00:06:21.975 [2024-12-14 02:47:36.866424] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:22.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (117679) - No such process 00:06:22.544 ERROR: process (pid: 117679) is no longer running 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 117668 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 117668 ']' 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 117668 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117668 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117668' 00:06:22.544 killing process with pid 117668 00:06:22.544 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 117668 00:06:22.544 02:47:37 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 117668 00:06:22.803 00:06:22.803 real 0m1.392s 00:06:22.803 user 0m3.873s 00:06:22.803 sys 0m0.397s 00:06:22.803 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.803 02:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.803 ************************************ 00:06:22.803 END TEST locking_overlapped_coremask 00:06:22.803 ************************************ 00:06:22.803 02:47:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:22.803 02:47:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.803 02:47:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.803 02:47:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.803 ************************************ 00:06:22.803 START TEST locking_overlapped_coremask_via_rpc 00:06:22.803 ************************************ 00:06:22.803 02:47:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:22.803 02:47:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=117929 00:06:22.803 02:47:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 117929 /var/tmp/spdk.sock 00:06:22.803 02:47:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:22.803 02:47:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 117929 ']' 00:06:22.803 02:47:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.803 02:47:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.803 02:47:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.803 02:47:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.803 02:47:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.803 [2024-12-14 02:47:37.875836] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:22.803 [2024-12-14 02:47:37.875880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117929 ] 00:06:23.063 [2024-12-14 02:47:37.949103] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
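locking_overlapped_coremask, which finishes above, checks partial overlap: the first target runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the second fails on the shared core 2, and check_remaining_locks then confirms that exactly /var/tmp/spdk_cpu_lock_000 through _002 are still present. In outline:

    spdk_tgt -m 0x7 &                          # claims cores 0, 1, 2
    spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock    # fails: core 2 already claimed
    ls /var/tmp/spdk_cpu_lock_*                # still only _000 _001 _002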
00:06:23.063 [2024-12-14 02:47:37.949125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.063 [2024-12-14 02:47:37.971575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.063 [2024-12-14 02:47:37.971684] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.063 [2024-12-14 02:47:37.971686] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.063 02:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.063 02:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.063 02:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=117936 00:06:23.063 02:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 117936 /var/tmp/spdk2.sock 00:06:23.063 02:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:23.063 02:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 117936 ']' 00:06:23.063 02:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.063 02:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.063 02:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.063 02:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.063 02:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.322 [2024-12-14 02:47:38.222000] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:23.322 [2024-12-14 02:47:38.222045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117936 ] 00:06:23.322 [2024-12-14 02:47:38.311414] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.322 [2024-12-14 02:47:38.311443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.322 [2024-12-14 02:47:38.360097] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.322 [2024-12-14 02:47:38.363357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.322 [2024-12-14 02:47:38.363359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.258 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.258 [2024-12-14 02:47:39.072384] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117929 has claimed it. 
00:06:24.258 request: 00:06:24.258 { 00:06:24.258 "method": "framework_enable_cpumask_locks", 00:06:24.258 "req_id": 1 00:06:24.258 } 00:06:24.258 Got JSON-RPC error response 00:06:24.258 response: 00:06:24.258 { 00:06:24.258 "code": -32603, 00:06:24.259 "message": "Failed to claim CPU core: 2" 00:06:24.259 } 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 117929 /var/tmp/spdk.sock 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 117929 ']' 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 117936 /var/tmp/spdk2.sock 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 117936 ']' 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
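In locking_overlapped_coremask_via_rpc both targets start with --disable-cpumask-locks, and the claiming is done afterwards over JSON-RPC: the first target's framework_enable_cpumask_locks succeeds, while the second target's attempt on the overlapping mask is rejected with the -32603 "Failed to claim CPU core: 2" error echoed above. A sketch using the rpc_cmd wrapper and the sockets from the trace:

    rpc_cmd framework_enable_cpumask_locks                              # first target: claims cores 0-2
    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: core 2 refused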
00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.259 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.518 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.518 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:24.518 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:24.518 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.518 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.518 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.518 00:06:24.518 real 0m1.664s 00:06:24.518 user 0m0.810s 00:06:24.518 sys 0m0.147s 00:06:24.518 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.518 02:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.518 ************************************ 00:06:24.518 END TEST locking_overlapped_coremask_via_rpc 00:06:24.518 ************************************ 00:06:24.518 02:47:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:24.518 02:47:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 117929 ]] 00:06:24.518 02:47:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 117929 00:06:24.518 02:47:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 117929 ']' 00:06:24.518 02:47:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 117929 00:06:24.518 02:47:39 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:24.518 02:47:39 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.518 02:47:39 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117929 00:06:24.519 02:47:39 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.519 02:47:39 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.519 02:47:39 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117929' 00:06:24.519 killing process with pid 117929 00:06:24.519 02:47:39 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 117929 00:06:24.519 02:47:39 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 117929 00:06:24.778 02:47:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 117936 ]] 00:06:24.778 02:47:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 117936 00:06:24.778 02:47:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 117936 ']' 00:06:24.778 02:47:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 117936 00:06:24.778 02:47:39 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:24.778 02:47:39 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:06:24.778 02:47:39 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117936 00:06:25.036 02:47:39 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:25.037 02:47:39 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:25.037 02:47:39 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117936' 00:06:25.037 killing process with pid 117936 00:06:25.037 02:47:39 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 117936 00:06:25.037 02:47:39 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 117936 00:06:25.296 02:47:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.296 02:47:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:25.296 02:47:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 117929 ]] 00:06:25.296 02:47:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 117929 00:06:25.296 02:47:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 117929 ']' 00:06:25.296 02:47:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 117929 00:06:25.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (117929) - No such process 00:06:25.296 02:47:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 117929 is not found' 00:06:25.296 Process with pid 117929 is not found 00:06:25.296 02:47:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 117936 ]] 00:06:25.296 02:47:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 117936 00:06:25.296 02:47:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 117936 ']' 00:06:25.296 02:47:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 117936 00:06:25.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (117936) - No such process 00:06:25.296 02:47:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 117936 is not found' 00:06:25.296 Process with pid 117936 is not found 00:06:25.296 02:47:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.296 00:06:25.296 real 0m13.960s 00:06:25.296 user 0m24.296s 00:06:25.296 sys 0m5.016s 00:06:25.296 02:47:40 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.296 02:47:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.296 ************************************ 00:06:25.296 END TEST cpu_locks 00:06:25.296 ************************************ 00:06:25.296 00:06:25.296 real 0m38.611s 00:06:25.296 user 1m13.679s 00:06:25.296 sys 0m8.581s 00:06:25.296 02:47:40 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.296 02:47:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.296 ************************************ 00:06:25.296 END TEST event 00:06:25.296 ************************************ 00:06:25.296 02:47:40 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:25.296 02:47:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.296 02:47:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.296 02:47:40 -- common/autotest_common.sh@10 -- # set +x 00:06:25.296 ************************************ 00:06:25.296 START TEST thread 00:06:25.296 ************************************ 00:06:25.296 02:47:40 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:25.296 * Looking for test storage... 00:06:25.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:25.555 02:47:40 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:25.555 02:47:40 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:25.555 02:47:40 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:25.555 02:47:40 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.555 02:47:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.555 02:47:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.555 02:47:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.555 02:47:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.555 02:47:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.555 02:47:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.555 02:47:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.555 02:47:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.555 02:47:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.555 02:47:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.555 02:47:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.555 02:47:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:25.555 02:47:40 thread -- scripts/common.sh@345 -- # : 1 00:06:25.555 02:47:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.555 02:47:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.555 02:47:40 thread -- scripts/common.sh@365 -- # decimal 1 00:06:25.555 02:47:40 thread -- scripts/common.sh@353 -- # local d=1 00:06:25.555 02:47:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.555 02:47:40 thread -- scripts/common.sh@355 -- # echo 1 00:06:25.555 02:47:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.555 02:47:40 thread -- scripts/common.sh@366 -- # decimal 2 00:06:25.555 02:47:40 thread -- scripts/common.sh@353 -- # local d=2 00:06:25.555 02:47:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.555 02:47:40 thread -- scripts/common.sh@355 -- # echo 2 00:06:25.555 02:47:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.555 02:47:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.555 02:47:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.555 02:47:40 thread -- scripts/common.sh@368 -- # return 0 00:06:25.555 02:47:40 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.555 02:47:40 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.555 --rc genhtml_branch_coverage=1 00:06:25.555 --rc genhtml_function_coverage=1 00:06:25.555 --rc genhtml_legend=1 00:06:25.555 --rc geninfo_all_blocks=1 00:06:25.555 --rc geninfo_unexecuted_blocks=1 00:06:25.555 00:06:25.555 ' 00:06:25.555 02:47:40 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.555 --rc genhtml_branch_coverage=1 00:06:25.555 --rc genhtml_function_coverage=1 00:06:25.555 --rc genhtml_legend=1 00:06:25.555 --rc geninfo_all_blocks=1 00:06:25.555 --rc geninfo_unexecuted_blocks=1 00:06:25.555 00:06:25.555 ' 00:06:25.555 02:47:40 thread 
-- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.555 --rc genhtml_branch_coverage=1 00:06:25.555 --rc genhtml_function_coverage=1 00:06:25.555 --rc genhtml_legend=1 00:06:25.555 --rc geninfo_all_blocks=1 00:06:25.555 --rc geninfo_unexecuted_blocks=1 00:06:25.555 00:06:25.555 ' 00:06:25.555 02:47:40 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.555 --rc genhtml_branch_coverage=1 00:06:25.555 --rc genhtml_function_coverage=1 00:06:25.555 --rc genhtml_legend=1 00:06:25.555 --rc geninfo_all_blocks=1 00:06:25.555 --rc geninfo_unexecuted_blocks=1 00:06:25.555 00:06:25.555 ' 00:06:25.555 02:47:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:25.555 02:47:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:25.555 02:47:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.555 02:47:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.555 ************************************ 00:06:25.555 START TEST thread_poller_perf 00:06:25.555 ************************************ 00:06:25.555 02:47:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:25.555 [2024-12-14 02:47:40.572502] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:25.555 [2024-12-14 02:47:40.572560] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118485 ] 00:06:25.555 [2024-12-14 02:47:40.650478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.555 [2024-12-14 02:47:40.672759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.555 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:26.929 [2024-12-14T01:47:42.062Z] ====================================== 00:06:26.929 [2024-12-14T01:47:42.062Z] busy:2109375428 (cyc) 00:06:26.929 [2024-12-14T01:47:42.062Z] total_run_count: 417000 00:06:26.929 [2024-12-14T01:47:42.062Z] tsc_hz: 2100000000 (cyc) 00:06:26.929 [2024-12-14T01:47:42.062Z] ====================================== 00:06:26.929 [2024-12-14T01:47:42.062Z] poller_cost: 5058 (cyc), 2408 (nsec) 00:06:26.929 00:06:26.929 real 0m1.160s 00:06:26.929 user 0m1.078s 00:06:26.929 sys 0m0.079s 00:06:26.929 02:47:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.929 02:47:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.929 ************************************ 00:06:26.929 END TEST thread_poller_perf 00:06:26.929 ************************************ 00:06:26.929 02:47:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.929 02:47:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:26.929 02:47:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.929 02:47:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.929 ************************************ 00:06:26.929 START TEST thread_poller_perf 00:06:26.929 ************************************ 00:06:26.929 02:47:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.929 [2024-12-14 02:47:41.806526] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:26.930 [2024-12-14 02:47:41.806610] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118733 ] 00:06:26.930 [2024-12-14 02:47:41.884139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.930 [2024-12-14 02:47:41.908298] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.930 Running 1000 pollers for 1 seconds with 0 microseconds period. 
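[editorial sketch, not part of the captured run] As a sanity check on the first poller_perf result above: poller_cost is the busy cycle count divided by total_run_count, and the nanosecond figure follows from tsc_hz = 2.1 GHz. The numbers below come straight from the log; the variable names and the shell arithmetic are illustrative assumptions only.

    # 2109375428 cyc busy over 417000 poller runs at 2.1 GHz
    busy_cyc=2109375428; total_run_count=417000; tsc_hz=2100000000
    echo "poller_cost: $(( busy_cyc / total_run_count )) cyc"                              # ~5058 cyc
    echo "poller_cost: $(( (busy_cyc / total_run_count) * 1000000000 / tsc_hz )) nsec"     # ~2408 nsec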
00:06:27.865 [2024-12-14T01:47:42.998Z] ====================================== 00:06:27.865 [2024-12-14T01:47:42.998Z] busy:2101401824 (cyc) 00:06:27.865 [2024-12-14T01:47:42.998Z] total_run_count: 5113000 00:06:27.865 [2024-12-14T01:47:42.998Z] tsc_hz: 2100000000 (cyc) 00:06:27.865 [2024-12-14T01:47:42.998Z] ====================================== 00:06:27.865 [2024-12-14T01:47:42.998Z] poller_cost: 410 (cyc), 195 (nsec) 00:06:27.865 00:06:27.865 real 0m1.156s 00:06:27.865 user 0m1.078s 00:06:27.865 sys 0m0.075s 00:06:27.865 02:47:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.865 02:47:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.865 ************************************ 00:06:27.865 END TEST thread_poller_perf 00:06:27.865 ************************************ 00:06:27.865 02:47:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:27.865 00:06:27.865 real 0m2.631s 00:06:27.865 user 0m2.315s 00:06:27.865 sys 0m0.331s 00:06:27.865 02:47:42 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.865 02:47:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.865 ************************************ 00:06:27.865 END TEST thread 00:06:27.865 ************************************ 00:06:28.125 02:47:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:28.125 02:47:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:28.125 02:47:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.125 02:47:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.125 02:47:43 -- common/autotest_common.sh@10 -- # set +x 00:06:28.125 ************************************ 00:06:28.125 START TEST app_cmdline 00:06:28.125 ************************************ 00:06:28.125 02:47:43 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:28.125 * Looking for test storage... 
00:06:28.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:28.125 02:47:43 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.125 02:47:43 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.125 02:47:43 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.125 02:47:43 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.125 02:47:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:28.125 02:47:43 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.125 02:47:43 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.125 --rc genhtml_branch_coverage=1 00:06:28.125 --rc genhtml_function_coverage=1 00:06:28.125 --rc genhtml_legend=1 00:06:28.125 --rc geninfo_all_blocks=1 00:06:28.125 --rc geninfo_unexecuted_blocks=1 00:06:28.125 00:06:28.125 ' 00:06:28.125 02:47:43 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.125 --rc genhtml_branch_coverage=1 00:06:28.125 --rc genhtml_function_coverage=1 00:06:28.125 --rc genhtml_legend=1 00:06:28.125 --rc geninfo_all_blocks=1 00:06:28.125 --rc geninfo_unexecuted_blocks=1 
00:06:28.125 00:06:28.125 ' 00:06:28.125 02:47:43 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.125 --rc genhtml_branch_coverage=1 00:06:28.125 --rc genhtml_function_coverage=1 00:06:28.125 --rc genhtml_legend=1 00:06:28.125 --rc geninfo_all_blocks=1 00:06:28.125 --rc geninfo_unexecuted_blocks=1 00:06:28.125 00:06:28.125 ' 00:06:28.125 02:47:43 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.125 --rc genhtml_branch_coverage=1 00:06:28.125 --rc genhtml_function_coverage=1 00:06:28.125 --rc genhtml_legend=1 00:06:28.125 --rc geninfo_all_blocks=1 00:06:28.125 --rc geninfo_unexecuted_blocks=1 00:06:28.125 00:06:28.125 ' 00:06:28.125 02:47:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:28.125 02:47:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=119025 00:06:28.125 02:47:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 119025 00:06:28.125 02:47:43 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:28.125 02:47:43 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 119025 ']' 00:06:28.125 02:47:43 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.125 02:47:43 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.125 02:47:43 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.126 02:47:43 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.126 02:47:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.385 [2024-12-14 02:47:43.280542] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:28.385 [2024-12-14 02:47:43.280590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119025 ] 00:06:28.385 [2024-12-14 02:47:43.353614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.385 [2024-12-14 02:47:43.375265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.644 02:47:43 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.644 02:47:43 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:28.644 02:47:43 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:28.644 { 00:06:28.644 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:28.644 "fields": { 00:06:28.644 "major": 25, 00:06:28.644 "minor": 1, 00:06:28.644 "patch": 0, 00:06:28.644 "suffix": "-pre", 00:06:28.644 "commit": "e01cb43b8" 00:06:28.644 } 00:06:28.644 } 00:06:28.644 02:47:43 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:28.644 02:47:43 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:28.644 02:47:43 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:28.644 02:47:43 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:28.644 02:47:43 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:28.644 02:47:43 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:28.644 02:47:43 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:28.644 02:47:43 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.644 02:47:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.904 02:47:43 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.904 02:47:43 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:28.904 02:47:43 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:28.904 02:47:43 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.904 02:47:43 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:28.904 02:47:43 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.904 02:47:43 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.904 02:47:43 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.904 02:47:43 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.904 02:47:43 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.904 02:47:43 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.904 02:47:43 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.904 02:47:43 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.904 02:47:43 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:28.904 02:47:43 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.904 request: 00:06:28.904 { 00:06:28.904 "method": "env_dpdk_get_mem_stats", 00:06:28.904 "req_id": 1 00:06:28.904 } 00:06:28.904 Got JSON-RPC error response 00:06:28.904 response: 00:06:28.904 { 00:06:28.904 "code": -32601, 00:06:28.904 "message": "Method not found" 00:06:28.904 } 00:06:28.904 02:47:44 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:28.904 02:47:44 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.904 02:47:44 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.904 02:47:44 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.904 02:47:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 119025 00:06:28.904 02:47:44 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 119025 ']' 00:06:28.904 02:47:44 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 119025 00:06:28.904 02:47:44 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:28.904 02:47:44 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.904 02:47:44 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119025 00:06:29.164 02:47:44 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.164 02:47:44 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.164 02:47:44 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119025' 00:06:29.164 killing process with pid 119025 00:06:29.164 02:47:44 app_cmdline -- common/autotest_common.sh@973 -- # kill 119025 00:06:29.164 02:47:44 app_cmdline -- common/autotest_common.sh@978 -- # wait 119025 00:06:29.423 00:06:29.423 real 0m1.303s 00:06:29.423 user 0m1.517s 00:06:29.423 sys 0m0.450s 00:06:29.423 02:47:44 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.423 02:47:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.423 ************************************ 00:06:29.423 END TEST app_cmdline 00:06:29.423 ************************************ 00:06:29.423 02:47:44 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:29.423 02:47:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.423 02:47:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.423 02:47:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.423 ************************************ 00:06:29.423 START TEST version 00:06:29.423 ************************************ 00:06:29.423 02:47:44 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:29.423 * Looking for test storage... 
00:06:29.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:29.423 02:47:44 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.423 02:47:44 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.423 02:47:44 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.684 02:47:44 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.684 02:47:44 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.684 02:47:44 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.684 02:47:44 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.684 02:47:44 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.684 02:47:44 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.684 02:47:44 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.684 02:47:44 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.684 02:47:44 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.684 02:47:44 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.684 02:47:44 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.684 02:47:44 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.684 02:47:44 version -- scripts/common.sh@344 -- # case "$op" in 00:06:29.684 02:47:44 version -- scripts/common.sh@345 -- # : 1 00:06:29.684 02:47:44 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.684 02:47:44 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.684 02:47:44 version -- scripts/common.sh@365 -- # decimal 1 00:06:29.684 02:47:44 version -- scripts/common.sh@353 -- # local d=1 00:06:29.684 02:47:44 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.684 02:47:44 version -- scripts/common.sh@355 -- # echo 1 00:06:29.684 02:47:44 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.684 02:47:44 version -- scripts/common.sh@366 -- # decimal 2 00:06:29.684 02:47:44 version -- scripts/common.sh@353 -- # local d=2 00:06:29.684 02:47:44 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.684 02:47:44 version -- scripts/common.sh@355 -- # echo 2 00:06:29.684 02:47:44 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.684 02:47:44 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.684 02:47:44 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.684 02:47:44 version -- scripts/common.sh@368 -- # return 0 00:06:29.684 02:47:44 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.684 02:47:44 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.684 --rc genhtml_branch_coverage=1 00:06:29.684 --rc genhtml_function_coverage=1 00:06:29.684 --rc genhtml_legend=1 00:06:29.684 --rc geninfo_all_blocks=1 00:06:29.684 --rc geninfo_unexecuted_blocks=1 00:06:29.684 00:06:29.684 ' 00:06:29.684 02:47:44 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:29.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.684 --rc genhtml_branch_coverage=1 00:06:29.684 --rc genhtml_function_coverage=1 00:06:29.684 --rc genhtml_legend=1 00:06:29.684 --rc geninfo_all_blocks=1 00:06:29.684 --rc geninfo_unexecuted_blocks=1 00:06:29.684 00:06:29.684 ' 00:06:29.684 02:47:44 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:29.684 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.684 --rc genhtml_branch_coverage=1 00:06:29.684 --rc genhtml_function_coverage=1 00:06:29.684 --rc genhtml_legend=1 00:06:29.684 --rc geninfo_all_blocks=1 00:06:29.684 --rc geninfo_unexecuted_blocks=1 00:06:29.684 00:06:29.684 ' 00:06:29.684 02:47:44 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.684 --rc genhtml_branch_coverage=1 00:06:29.684 --rc genhtml_function_coverage=1 00:06:29.684 --rc genhtml_legend=1 00:06:29.684 --rc geninfo_all_blocks=1 00:06:29.684 --rc geninfo_unexecuted_blocks=1 00:06:29.684 00:06:29.684 ' 00:06:29.684 02:47:44 version -- app/version.sh@17 -- # get_header_version major 00:06:29.684 02:47:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:29.684 02:47:44 version -- app/version.sh@14 -- # cut -f2 00:06:29.684 02:47:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.684 02:47:44 version -- app/version.sh@17 -- # major=25 00:06:29.684 02:47:44 version -- app/version.sh@18 -- # get_header_version minor 00:06:29.684 02:47:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:29.684 02:47:44 version -- app/version.sh@14 -- # cut -f2 00:06:29.684 02:47:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.684 02:47:44 version -- app/version.sh@18 -- # minor=1 00:06:29.684 02:47:44 version -- app/version.sh@19 -- # get_header_version patch 00:06:29.684 02:47:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:29.684 02:47:44 version -- app/version.sh@14 -- # cut -f2 00:06:29.684 02:47:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.684 02:47:44 version -- app/version.sh@19 -- # patch=0 00:06:29.684 02:47:44 version -- app/version.sh@20 -- # get_header_version suffix 00:06:29.684 02:47:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:29.684 02:47:44 version -- app/version.sh@14 -- # cut -f2 00:06:29.684 02:47:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.684 02:47:44 version -- app/version.sh@20 -- # suffix=-pre 00:06:29.684 02:47:44 version -- app/version.sh@22 -- # version=25.1 00:06:29.684 02:47:44 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:29.684 02:47:44 version -- app/version.sh@28 -- # version=25.1rc0 00:06:29.684 02:47:44 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:29.684 02:47:44 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:29.684 02:47:44 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:29.684 02:47:44 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:29.684 00:06:29.684 real 0m0.255s 00:06:29.684 user 0m0.143s 00:06:29.684 sys 0m0.156s 00:06:29.684 02:47:44 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.684 
02:47:44 version -- common/autotest_common.sh@10 -- # set +x 00:06:29.684 ************************************ 00:06:29.684 END TEST version 00:06:29.684 ************************************ 00:06:29.684 02:47:44 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:29.684 02:47:44 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:29.684 02:47:44 -- spdk/autotest.sh@194 -- # uname -s 00:06:29.684 02:47:44 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:29.684 02:47:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:29.684 02:47:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:29.684 02:47:44 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:29.684 02:47:44 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:29.684 02:47:44 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:29.684 02:47:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.684 02:47:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.684 02:47:44 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:29.684 02:47:44 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:29.684 02:47:44 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:29.684 02:47:44 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:29.684 02:47:44 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:29.684 02:47:44 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:29.684 02:47:44 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:29.684 02:47:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:29.684 02:47:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.684 02:47:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.685 ************************************ 00:06:29.685 START TEST nvmf_tcp 00:06:29.685 ************************************ 00:06:29.685 02:47:44 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:29.944 * Looking for test storage... 
00:06:29.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:29.944 02:47:44 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.944 02:47:44 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.944 02:47:44 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.944 02:47:44 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.944 02:47:44 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:29.945 02:47:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:29.945 02:47:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.945 02:47:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:29.945 02:47:44 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.945 02:47:44 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:29.945 02:47:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:29.945 02:47:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.945 02:47:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:29.945 02:47:44 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.945 02:47:44 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.945 02:47:44 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.945 02:47:44 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:29.945 02:47:44 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.945 02:47:44 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.945 --rc genhtml_branch_coverage=1 00:06:29.945 --rc genhtml_function_coverage=1 00:06:29.945 --rc genhtml_legend=1 00:06:29.945 --rc geninfo_all_blocks=1 00:06:29.945 --rc geninfo_unexecuted_blocks=1 00:06:29.945 00:06:29.945 ' 00:06:29.945 02:47:44 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:29.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.945 --rc genhtml_branch_coverage=1 00:06:29.945 --rc genhtml_function_coverage=1 00:06:29.945 --rc genhtml_legend=1 00:06:29.945 --rc geninfo_all_blocks=1 00:06:29.945 --rc geninfo_unexecuted_blocks=1 00:06:29.945 00:06:29.945 ' 00:06:29.945 02:47:44 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:29.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.945 --rc genhtml_branch_coverage=1 00:06:29.945 --rc genhtml_function_coverage=1 00:06:29.945 --rc genhtml_legend=1 00:06:29.945 --rc geninfo_all_blocks=1 00:06:29.945 --rc geninfo_unexecuted_blocks=1 00:06:29.945 00:06:29.945 ' 00:06:29.945 02:47:44 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.945 --rc genhtml_branch_coverage=1 00:06:29.945 --rc genhtml_function_coverage=1 00:06:29.945 --rc genhtml_legend=1 00:06:29.945 --rc geninfo_all_blocks=1 00:06:29.945 --rc geninfo_unexecuted_blocks=1 00:06:29.945 00:06:29.945 ' 00:06:29.945 02:47:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:29.945 02:47:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:29.945 02:47:44 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:29.945 02:47:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:29.945 02:47:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.945 02:47:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.945 ************************************ 00:06:29.945 START TEST nvmf_target_core 00:06:29.945 ************************************ 00:06:29.945 02:47:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:29.945 * Looking for test storage... 00:06:30.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.207 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.208 --rc genhtml_branch_coverage=1 00:06:30.208 --rc genhtml_function_coverage=1 00:06:30.208 --rc genhtml_legend=1 00:06:30.208 --rc geninfo_all_blocks=1 00:06:30.208 --rc geninfo_unexecuted_blocks=1 00:06:30.208 00:06:30.208 ' 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.208 --rc genhtml_branch_coverage=1 00:06:30.208 --rc genhtml_function_coverage=1 00:06:30.208 --rc genhtml_legend=1 00:06:30.208 --rc geninfo_all_blocks=1 00:06:30.208 --rc geninfo_unexecuted_blocks=1 00:06:30.208 00:06:30.208 ' 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.208 --rc genhtml_branch_coverage=1 00:06:30.208 --rc genhtml_function_coverage=1 00:06:30.208 --rc genhtml_legend=1 00:06:30.208 --rc geninfo_all_blocks=1 00:06:30.208 --rc geninfo_unexecuted_blocks=1 00:06:30.208 00:06:30.208 ' 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.208 --rc genhtml_branch_coverage=1 00:06:30.208 --rc genhtml_function_coverage=1 00:06:30.208 --rc genhtml_legend=1 00:06:30.208 --rc geninfo_all_blocks=1 00:06:30.208 --rc geninfo_unexecuted_blocks=1 00:06:30.208 00:06:30.208 ' 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:30.208 
************************************ 00:06:30.208 START TEST nvmf_abort 00:06:30.208 ************************************ 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:30.208 * Looking for test storage... 00:06:30.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.208 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:30.467 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.468 --rc genhtml_branch_coverage=1 00:06:30.468 --rc genhtml_function_coverage=1 00:06:30.468 --rc genhtml_legend=1 00:06:30.468 --rc geninfo_all_blocks=1 00:06:30.468 --rc geninfo_unexecuted_blocks=1 00:06:30.468 00:06:30.468 ' 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.468 --rc genhtml_branch_coverage=1 00:06:30.468 --rc genhtml_function_coverage=1 00:06:30.468 --rc genhtml_legend=1 00:06:30.468 --rc geninfo_all_blocks=1 00:06:30.468 --rc geninfo_unexecuted_blocks=1 00:06:30.468 00:06:30.468 ' 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.468 --rc genhtml_branch_coverage=1 00:06:30.468 --rc genhtml_function_coverage=1 00:06:30.468 --rc genhtml_legend=1 00:06:30.468 --rc geninfo_all_blocks=1 00:06:30.468 --rc geninfo_unexecuted_blocks=1 00:06:30.468 00:06:30.468 ' 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.468 --rc genhtml_branch_coverage=1 00:06:30.468 --rc genhtml_function_coverage=1 00:06:30.468 --rc genhtml_legend=1 00:06:30.468 --rc geninfo_all_blocks=1 00:06:30.468 --rc geninfo_unexecuted_blocks=1 00:06:30.468 00:06:30.468 ' 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
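The repeated "common.sh: line 33: [: : integer expression expected" message above comes from the traced test '[' '' -eq 1 ']': the variable being compared expands to an empty string, and the numeric -eq operator requires integer operands, so the test errors out with a non-zero status instead of quietly evaluating to false, and the script simply continues. A minimal sketch of the failure mode and two defensive spellings follows; the variable name flag is illustrative, not the one common.sh actually uses.

# Reproduce the message seen in the trace: an empty operand handed to -eq.
flag=""
[ "$flag" -eq 1 ] && echo yes       # prints "[: : integer expression expected", test returns non-zero

# Quieter alternatives when the variable may be unset or empty.
[ "${flag:-0}" -eq 1 ] && echo yes  # default the operand to 0
[[ "$flag" == 1 ]] && echo yes      # string comparison avoids the integer check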
00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:30.468 02:47:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:37.058 02:47:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:37.058 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:37.058 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:37.059 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:37.059 02:47:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:37.059 Found net devices under 0000:af:00.0: cvl_0_0 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:37.059 Found net devices under 0000:af:00.1: cvl_0_1 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:37.059 02:47:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:37.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:37.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:06:37.059 00:06:37.059 --- 10.0.0.2 ping statistics --- 00:06:37.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.059 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:37.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:37.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:06:37.059 00:06:37.059 --- 10.0.0.1 ping statistics --- 00:06:37.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.059 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:37.059 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=122647 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 122647 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 122647 ']' 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 [2024-12-14 02:47:51.537367] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:37.060 [2024-12-14 02:47:51.537407] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.060 [2024-12-14 02:47:51.612360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.060 [2024-12-14 02:47:51.635869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.060 [2024-12-14 02:47:51.635904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.060 [2024-12-14 02:47:51.635911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.060 [2024-12-14 02:47:51.635917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:37.060 [2024-12-14 02:47:51.635922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:37.060 [2024-12-14 02:47:51.637257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.060 [2024-12-14 02:47:51.637364] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.060 [2024-12-14 02:47:51.637365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 [2024-12-14 02:47:51.769175] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 Malloc0 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 Delay0 
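Target configuration in this suite goes through the harness's rpc_cmd helper, which forwards the calls to scripts/rpc.py against the running nvmf_tgt; the transport and bdev calls are traced above, and the subsystem and listener calls follow immediately below. A by-hand sketch of the same sequence, with arguments copied verbatim from the trace and the default /var/tmp/spdk.sock RPC socket assumed:

# Sketch: replay the traced RPCs with scripts/rpc.py (run from the spdk checkout).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420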
00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 [2024-12-14 02:47:51.859458] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.060 02:47:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:37.060 [2024-12-14 02:47:51.992094] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:38.967 Initializing NVMe Controllers 00:06:38.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:38.967 controller IO queue size 128 less than required 00:06:38.967 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:38.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:38.967 Initialization complete. Launching workers. 
00:06:38.967 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 39861 00:06:38.967 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 39922, failed to submit 62 00:06:38.967 success 39865, unsuccessful 57, failed 0 00:06:38.967 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:38.967 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.967 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.967 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.967 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:38.967 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:38.967 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:38.967 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:38.967 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:38.967 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:38.967 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:38.967 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:38.967 rmmod nvme_tcp 00:06:39.227 rmmod nvme_fabrics 00:06:39.227 rmmod nvme_keyring 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 122647 ']' 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 122647 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 122647 ']' 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 122647 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122647 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122647' 00:06:39.227 killing process with pid 122647 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 122647 00:06:39.227 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 122647 00:06:39.487 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:39.487 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:39.487 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:39.487 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:39.487 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:39.487 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:39.487 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:39.487 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:39.487 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:39.487 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.487 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:39.487 02:47:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.395 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:41.395 00:06:41.395 real 0m11.206s 00:06:41.395 user 0m11.871s 00:06:41.395 sys 0m5.145s 00:06:41.395 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.395 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:41.395 ************************************ 00:06:41.395 END TEST nvmf_abort 00:06:41.395 ************************************ 00:06:41.395 02:47:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:41.395 02:47:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:41.395 02:47:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.395 02:47:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:41.395 ************************************ 00:06:41.395 START TEST nvmf_ns_hotplug_stress 00:06:41.395 ************************************ 00:06:41.395 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:41.655 * Looking for test storage... 
00:06:41.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.655 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:41.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.656 --rc genhtml_branch_coverage=1 00:06:41.656 --rc genhtml_function_coverage=1 00:06:41.656 --rc genhtml_legend=1 00:06:41.656 --rc geninfo_all_blocks=1 00:06:41.656 --rc geninfo_unexecuted_blocks=1 00:06:41.656 00:06:41.656 ' 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:41.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.656 --rc genhtml_branch_coverage=1 00:06:41.656 --rc genhtml_function_coverage=1 00:06:41.656 --rc genhtml_legend=1 00:06:41.656 --rc geninfo_all_blocks=1 00:06:41.656 --rc geninfo_unexecuted_blocks=1 00:06:41.656 00:06:41.656 ' 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:41.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.656 --rc genhtml_branch_coverage=1 00:06:41.656 --rc genhtml_function_coverage=1 00:06:41.656 --rc genhtml_legend=1 00:06:41.656 --rc geninfo_all_blocks=1 00:06:41.656 --rc geninfo_unexecuted_blocks=1 00:06:41.656 00:06:41.656 ' 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:41.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.656 --rc genhtml_branch_coverage=1 00:06:41.656 --rc genhtml_function_coverage=1 00:06:41.656 --rc genhtml_legend=1 00:06:41.656 --rc geninfo_all_blocks=1 00:06:41.656 --rc geninfo_unexecuted_blocks=1 00:06:41.656 00:06:41.656 ' 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:41.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.656 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:41.657 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:41.657 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:41.657 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.657 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.657 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.657 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:41.657 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:41.657 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:41.657 02:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:48.231 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.231 
02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:48.231 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:48.231 Found net devices under 0000:af:00.0: cvl_0_0 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.231 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:48.232 Found net devices under 0000:af:00.1: cvl_0_1 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:48.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:06:48.232 00:06:48.232 --- 10.0.0.2 ping statistics --- 00:06:48.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.232 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:48.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:06:48.232 00:06:48.232 --- 10.0.0.1 ping statistics --- 00:06:48.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.232 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=126725 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 126725 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
126725 ']' 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 [2024-12-14 02:48:02.773965] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:48.232 [2024-12-14 02:48:02.774016] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.232 [2024-12-14 02:48:02.855731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.232 [2024-12-14 02:48:02.878445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.232 [2024-12-14 02:48:02.878481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.232 [2024-12-14 02:48:02.878488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.232 [2024-12-14 02:48:02.878494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.232 [2024-12-14 02:48:02.878499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
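nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks on the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step until the RPC socket answers. A sketch of that wait pattern, assuming rpc.py's -s option and the spdk_get_version method behave as in current SPDK (the retry budget and sleep interval here are arbitrary):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    wait_for_rpc() {
        local i
        for ((i = 0; i < 100; i++)); do
            if "$rpc_py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
                return 0                                   # target is up and serving RPCs
            fi
            sleep 0.5
        done
        echo "nvmf_tgt never started listening on /var/tmp/spdk.sock" >&2
        return 1
    }

Only once this wait returns does the script proceed to create the TCP transport, subsystem and listener seen in the trace below.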
00:06:48.232 [2024-12-14 02:48:02.879745] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.232 [2024-12-14 02:48:02.879843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.232 [2024-12-14 02:48:02.879843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:48.232 02:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:48.232 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.232 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:48.233 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:48.233 [2024-12-14 02:48:03.179054] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.233 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:48.492 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:48.492 [2024-12-14 02:48:03.584517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:48.492 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:48.750 02:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:49.010 Malloc0 00:06:49.010 02:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:49.268 Delay0 00:06:49.268 02:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.528 02:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:49.528 NULL1 00:06:49.528 02:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:49.787 02:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:49.787 02:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=126988 00:06:49.787 02:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:06:49.787 02:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.166 Read completed with error (sct=0, sc=11) 00:06:51.166 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.166 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:51.166 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:51.425 true 00:06:51.425 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:06:51.425 02:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.363 02:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.363 02:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:52.363 02:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:52.621 true 00:06:52.621 02:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:06:52.621 02:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.880 02:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.140 
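From this point the trace settles into the hot-plug stress loop: spdk_nvme_perf hammers the subsystem for 30 seconds while the script repeatedly removes namespace 1, re-adds Delay0 and grows NULL1. The shape of that loop, reconstructed from the xtrace (a sketch, not a verbatim copy of ns_hotplug_stress.sh):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000

    # I/O load for 30 seconds against the subsystem (flags taken from the trace)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    while kill -0 "$PERF_PID" 2>/dev/null; do              # perf still running?
        $rpc_py nvmf_subsystem_remove_ns "$nqn" 1          # hot-remove namespace 1
        $rpc_py nvmf_subsystem_add_ns "$nqn" Delay0        # hot-add it back (Delay0 bdev)
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"        # grow the NULL1-backed namespace
    done

The "Read completed with error" suppression messages interleaved in the trace are the expected initiator-side fallout of those hot removals while randread I/O is in flight.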
02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:53.140 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:53.140 true 00:06:53.140 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:06:53.140 02:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.519 02:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.520 02:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:54.520 02:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:54.520 true 00:06:54.779 02:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:06:54.779 02:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.779 02:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.038 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:55.038 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:55.298 true 00:06:55.298 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:06:55.298 02:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.237 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.497 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:56.497 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:56.756 true 00:06:56.756 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:06:56.756 02:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.696 02:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.696 02:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:57.696 02:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:57.955 true 00:06:57.955 02:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:06:57.955 02:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.215 02:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.215 02:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:58.215 02:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:58.474 true 00:06:58.474 02:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:06:58.474 02:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.856 02:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.856 02:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:59.856 02:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:00.116 true 00:07:00.116 02:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:00.116 02:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.116 02:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
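Each bdev_null_resize above simply returns true. If one wanted to confirm from the script that a resize actually landed, a check along these lines could be added; this is a sketch the traced script does not perform, and it assumes the bdev_get_bdevs JSON field names and an installed jq:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Read the block count back after a resize.
    new_blocks=$($rpc_py bdev_get_bdevs -b NULL1 | jq '.[0].num_blocks')
    echo "NULL1 now has $new_blocks blocks"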
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.374 02:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:00.375 02:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:00.634 true 00:07:00.634 02:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:00.634 02:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.833 02:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.833 02:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:01.833 02:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:02.092 true 00:07:02.092 02:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:02.093 02:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.032 02:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.032 02:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:03.032 02:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:03.291 true 00:07:03.291 02:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:03.291 02:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.550 02:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.810 02:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 
00:07:03.810 02:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:03.810 true 00:07:04.069 02:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:04.069 02:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.008 02:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.269 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:05.269 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:05.269 true 00:07:05.269 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:05.269 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.529 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.789 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:05.789 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:06.049 true 00:07:06.049 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:06.049 02:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.988 02:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.247 02:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:07.247 02:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:07.507 true 00:07:07.507 02:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:07.507 02:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.986 02:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.986 02:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:07.986 02:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:07.986 true 00:07:07.986 02:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:07.986 02:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.365 02:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.365 02:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:09.365 02:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:09.625 true 00:07:09.625 02:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:09.625 02:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.884 02:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.884 02:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:09.884 02:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:10.142 true 00:07:10.142 02:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:10.142 02:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.401 02:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.401 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:07:10.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.660 02:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:10.660 02:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:10.919 true 00:07:10.919 02:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:10.919 02:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.864 02:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.864 02:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:11.864 02:48:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:12.124 true 00:07:12.124 02:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:12.124 02:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.124 02:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.383 02:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:12.383 02:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:12.642 true 00:07:12.642 02:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:12.642 02:48:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.580 02:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.840 02:48:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:13.840 02:48:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:14.099 true 00:07:14.099 02:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:14.099 02:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.359 02:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.618 02:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:14.618 02:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:14.618 true 00:07:14.618 02:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:14.618 02:48:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.000 02:48:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.000 02:48:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:16.000 02:48:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:16.260 true 00:07:16.260 02:48:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:16.260 02:48:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.199 02:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.199 02:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:17.199 02:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:17.459 true 00:07:17.459 02:48:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:17.459 02:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.719 02:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.979 02:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:17.979 02:48:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:17.979 true 00:07:17.979 02:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:17.979 02:48:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.359 02:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.359 02:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:19.359 02:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:19.618 true 00:07:19.619 02:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:19.619 02:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.878 02:48:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.878 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:19.878 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:20.142 Initializing NVMe Controllers 00:07:20.142 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:20.142 Controller IO queue size 128, less than required. 00:07:20.142 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:20.142 Controller IO queue size 128, less than required. 00:07:20.142 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:20.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:20.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:20.142 Initialization complete. Launching workers. 00:07:20.142 ======================================================== 00:07:20.142 Latency(us) 00:07:20.142 Device Information : IOPS MiB/s Average min max 00:07:20.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1561.68 0.76 49905.08 2065.62 1011432.03 00:07:20.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15986.57 7.81 8006.94 2295.33 369403.76 00:07:20.142 ======================================================== 00:07:20.142 Total : 17548.26 8.57 11735.60 2065.62 1011432.03 00:07:20.142 00:07:20.142 true 00:07:20.142 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 126988 00:07:20.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (126988) - No such process 00:07:20.142 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 126988 00:07:20.142 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.405 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.664 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:20.664 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:20.664 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:20.664 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.664 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:20.664 null0 00:07:20.924 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.924 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.924 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:20.924 null1 00:07:20.924 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.924 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.924 02:48:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:21.184 null2 00:07:21.184 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:21.184 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:21.184 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:21.444 null3 00:07:21.444 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:21.444 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:21.444 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:21.444 null4 00:07:21.703 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:21.703 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:21.703 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:21.703 null5 00:07:21.703 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:21.703 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:21.703 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:21.963 null6 00:07:21.963 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:21.963 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:21.963 02:48:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:22.223 null7 00:07:22.223 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:22.223 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:22.223 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:22.223 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:22.223 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:22.223 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:22.223 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:22.223 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:22.223 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:22.223 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:22.223 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.223 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:22.223 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:22.223 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
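The @14-@17 lines being traced here belong to the add_remove workers: each worker repeatedly attaches its own null bdev as a fixed NSID and detaches it again. Reconstructed as a sketch (the real function in ns_hotplug_stress.sh may differ in minor details):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"    # attach bdev as NSID
            $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"            # detach it again
        done
    }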
00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 132839 132840 132843 132844 132846 132848 132850 132851 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.224 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
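Editor's note: the xtrace entries above come from the setup phase of ns_hotplug_stress.sh. Script lines @59-@60 create the null bdevs (null0 through null7, 100 MB each with a 4096-byte block size), lines @62-@64 launch one background add_remove worker per bdev and collect its PID, and line @66 waits on all of them (the "wait 132839 132840 ..." entry). Below is a minimal sketch of that structure reconstructed from the trace, not copied from the upstream script; nthreads=8, the pids array name, and $rootdir are assumptions (eight workers is simply what this run shows), and add_remove is the helper sketched in the next note.

    # Reconstructed from the xtrace above; variable names and nthreads=8 are assumptions.
    # $rootdir stands in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # 100 MB null bdev with 4096-byte blocks, matching "bdev_null_create nullN 100 4096"
        "$rootdir/scripts/rpc.py" bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # one hotplug worker per namespace/bdev pair
        pids+=($!)
    done
    wait "${pids[@]}"                      # corresponds to the "wait 132839 132840 ..." entry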
00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.484 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.485 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.485 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.485 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.485 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.744 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.744 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.744 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.744 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
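Editor's note: each background worker runs the add_remove helper whose body is what keeps repeating in the trace (script lines @14-@18). It pins one namespace ID to one null bdev and then, ten times in a row, attaches the bdev as that namespace on nqn.2016-06.io.spdk:cnode1 and immediately hot-removes it again. The sketch below is a hedged reconstruction from the trace: the argument order mirrors the rpc.py calls logged above, and the loop bound of 10 comes from the "(( i < 10 ))" guards.

    # Sketch of the add_remove worker implied by the trace; not copied from the script.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # attach the bdev as namespace $nsid on cnode1, then hot-remove it again
            "$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rootdir/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

With eight such workers running concurrently, the interleaved add/remove entries that follow are expected: each worker keeps its own add-then-remove order for its nsid, but rounds from different workers overlap freely.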
00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.745 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.005 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.005 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.005 02:48:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.005 02:48:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.005 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.265 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
00:07:23.265 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.265 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.265 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.265 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.265 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.265 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.265 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.525 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.525 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.525 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.526 02:48:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.526 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.526 02:48:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
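Editor's note: while the overlapping rounds above churn, the namespace list on cnode1 changes continuously; it can be sampled out-of-band with the nvmf_get_subsystems RPC. A minimal example follows, assuming jq is available and that the reply is the usual rpc.py JSON with an "nqn" field and a "namespaces" array per subsystem (neither appears in this log, so treat the field names as assumptions).

    # Snapshot the namespaces currently attached to cnode1 while the workers run.
    "$rootdir/scripts/rpc.py" nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'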
00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.785 02:48:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.045 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.045 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.045 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.045 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.045 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.045 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.045 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.045 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.305 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.305 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.305 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.305 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.305 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.305 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.305 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.305 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.305 02:48:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.306 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.566 02:48:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.566 02:48:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.566 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.825 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.825 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.825 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.825 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.825 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.826 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.826 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.826 02:48:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.085 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.345 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.345 02:48:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.345 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.345 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.345 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.345 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.345 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.345 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.606 02:48:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.606 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.866 02:48:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.866 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.866 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.866 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.866 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.866 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.866 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.866 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.866 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.866 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.866 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.866 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.867 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.867 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.867 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.867 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.867 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.867 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.867 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.867 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.867 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.867 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:07:25.867 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.867 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.867 02:48:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.127 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.127 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.127 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.127 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.127 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.127 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.127 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.127 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.387 02:48:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:26.387 rmmod nvme_tcp 00:07:26.387 rmmod nvme_fabrics 00:07:26.387 rmmod nvme_keyring 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:26.387 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:26.388 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:26.388 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 126725 ']' 00:07:26.388 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 126725 00:07:26.388 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 126725 ']' 00:07:26.388 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 126725 00:07:26.388 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:26.388 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.388 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126725 00:07:26.388 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:26.388 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:26.388 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126725' 00:07:26.388 killing process with pid 126725 00:07:26.388 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 126725 00:07:26.388 02:48:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 126725 00:07:26.648 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:26.648 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:26.648 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:26.648 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:26.648 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:26.648 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:26.648 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:26.648 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:26.648 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:26.648 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.648 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.648 02:48:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:29.190 00:07:29.190 real 0m47.175s 00:07:29.190 user 3m13.470s 00:07:29.190 sys 0m14.841s 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:29.190 ************************************ 00:07:29.190 END TEST nvmf_ns_hotplug_stress 00:07:29.190 ************************************ 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.190 ************************************ 00:07:29.190 START TEST nvmf_delete_subsystem 00:07:29.190 ************************************ 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:29.190 * Looking for test storage... 
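Editor's note: the block above is the standard nvmftestfini teardown that closes nvmf_ns_hotplug_stress before the next test, nvmf_delete_subsystem, starts probing for its test storage (the "Looking for test storage..." line above, answered just below): unload the nvme-tcp/nvme-fabrics kernel modules, kill the nvmf_tgt process (pid 126725), restore the iptables rules tagged for the test, and flush the initiator-side address. A hedged sketch of those cleanup steps, with names taken from the log and the retry/error masking of the real nvmf/common.sh omitted:

  # Hedged sketch of the nvmftestfini-style cleanup shown in the log.
  modprobe -v -r nvme-tcp || true                 # also drops nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics || true
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null  # works because nvmf_tgt was launched by this shell
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the test added
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
  ip -4 addr flush cvl_0_1                        # clear the initiator-side test address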
00:07:29.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:29.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.190 --rc genhtml_branch_coverage=1 00:07:29.190 --rc genhtml_function_coverage=1 00:07:29.190 --rc genhtml_legend=1 00:07:29.190 --rc geninfo_all_blocks=1 00:07:29.190 --rc geninfo_unexecuted_blocks=1 00:07:29.190 00:07:29.190 ' 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:29.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.190 --rc genhtml_branch_coverage=1 00:07:29.190 --rc genhtml_function_coverage=1 00:07:29.190 --rc genhtml_legend=1 00:07:29.190 --rc geninfo_all_blocks=1 00:07:29.190 --rc geninfo_unexecuted_blocks=1 00:07:29.190 00:07:29.190 ' 00:07:29.190 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:29.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.191 --rc genhtml_branch_coverage=1 00:07:29.191 --rc genhtml_function_coverage=1 00:07:29.191 --rc genhtml_legend=1 00:07:29.191 --rc geninfo_all_blocks=1 00:07:29.191 --rc geninfo_unexecuted_blocks=1 00:07:29.191 00:07:29.191 ' 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:29.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.191 --rc genhtml_branch_coverage=1 00:07:29.191 --rc genhtml_function_coverage=1 00:07:29.191 --rc genhtml_legend=1 00:07:29.191 --rc geninfo_all_blocks=1 00:07:29.191 --rc geninfo_unexecuted_blocks=1 00:07:29.191 00:07:29.191 ' 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.191 02:48:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:35.772 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.772 
02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:35.772 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.772 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:35.773 Found net devices under 0000:af:00.0: cvl_0_0 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:35.773 Found net devices under 0000:af:00.1: cvl_0_1 
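Editor's note: at this point prepare_net_devs has walked the supported PCI IDs, matched the two Intel E810 ports (device 0x159b) at 0000:af:00.0 and 0000:af:00.1, and resolved their kernel interfaces to cvl_0_0 and cvl_0_1 through sysfs. A hedged sketch of that lookup, reduced to the E810/TCP case seen here (the real gather_supported_nvmf_pci_devs also handles Mellanox IDs, RDMA, and unbound devices, and this lspci-based variant is my own shorthand, not the script's code):

  # Hedged sketch: map supported NICs to their net devices through sysfs.
  declare -a net_devs=()
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do   # Intel E810, as in the log
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $netdir ]] || continue
          net_devs+=("${netdir##*/}")                            # e.g. cvl_0_0, cvl_0_1
      done
  done
  echo "Found net devices: ${net_devs[*]}"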
00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:35.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:07:35.773 00:07:35.773 --- 10.0.0.2 ping statistics --- 00:07:35.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.773 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:35.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:07:35.773 00:07:35.773 --- 10.0.0.1 ping statistics --- 00:07:35.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.773 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=137359 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 137359 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 137359 ']' 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.773 02:48:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.773 02:48:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.773 [2024-12-14 02:48:49.982997] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:35.773 [2024-12-14 02:48:49.983047] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.773 [2024-12-14 02:48:50.063502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:35.773 [2024-12-14 02:48:50.086431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.773 [2024-12-14 02:48:50.086469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.773 [2024-12-14 02:48:50.086478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.773 [2024-12-14 02:48:50.086484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.773 [2024-12-14 02:48:50.086490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.773 [2024-12-14 02:48:50.087601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.773 [2024-12-14 02:48:50.087601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.774 [2024-12-14 02:48:50.227156] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:35.774 02:48:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.774 [2024-12-14 02:48:50.247366] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.774 NULL1 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.774 Delay0 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=137379 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:35.774 02:48:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:35.774 [2024-12-14 02:48:50.358340] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
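Editor's note: the setup delete_subsystem.sh has just completed can be read straight out of the trace: nvmf_tgt was started inside the cvl_0_0_ns_spdk namespace with core mask 0x3, a TCP transport and subsystem nqn.2016-06.io.spdk:cnode1 were created, a listener was added on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev was exposed as a namespace while spdk_nvme_perf starts pushing I/O from the initiator side. Condensed into a hedged sketch (paths and parameters copied from the log; waits, readiness checks, and error handling omitted):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1
  # target side: runs inside the test namespace; the RPC unix socket is shared via the filesystem
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  "$RPC" bdev_null_create NULL1 1000 512            # size 1000 / 512-byte blocks, as in the log
  "$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0
  # initiator side: queue up I/O that will still be in flight when the subsystem is deleted
  "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!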
00:07:37.154 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.414 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.414 02:48:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 starting I/O failed: -6 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 starting I/O failed: -6 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 starting I/O failed: -6 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 starting I/O failed: -6 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 starting I/O failed: -6 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 starting I/O failed: -6 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 starting I/O failed: -6 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 starting I/O failed: -6 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 starting I/O failed: -6 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 starting I/O failed: -6 00:07:37.414 [2024-12-14 02:48:52.476711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd60f70 is same with the state(6) to be set 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 
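Editor's note: the flood of "completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines that follows is the point of the test rather than a failure of it. nvmf_delete_subsystem is issued while spdk_nvme_perf still has up to 128 commands queued, so in-flight commands complete with an abort status (sc=8 appears to correspond to the NVMe generic "aborted, SQ deleted" code) and new submissions are rejected with -6 (ENXIO) once the qpairs are torn down. Continuing the sketch above (perf_pid from the previous sketch; hedged, not the literal script):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  sleep 2                                   # let perf ramp up and fill its queues
  "$RPC" nvmf_delete_subsystem "$NQN"       # yank the subsystem under active I/O
  wait "$perf_pid" || echo "perf exited non-zero, expected once its namespace vanished"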
00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error 
(sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 starting I/O failed: -6 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 starting I/O failed: -6 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 starting I/O failed: -6 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 starting I/O failed: -6 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.414 Write completed with error (sct=0, sc=8) 00:07:37.414 Read completed with error (sct=0, sc=8) 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 starting I/O failed: -6 00:07:37.415 Write completed with error (sct=0, sc=8) 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 starting I/O failed: -6 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 Write completed with error (sct=0, sc=8) 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 starting I/O failed: -6 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 Write completed with error (sct=0, sc=8) 00:07:37.415 Write completed with error (sct=0, sc=8) 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 starting I/O failed: -6 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 starting I/O failed: -6 00:07:37.415 Write completed with error (sct=0, sc=8) 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 Read completed with error (sct=0, sc=8) 00:07:37.415 Write completed with error (sct=0, sc=8) 00:07:37.415 starting I/O failed: -6 00:07:37.415 Write completed with error (sct=0, sc=8) 00:07:37.415 Read 
completed with error (sct=0, sc=8)
00:07:37.415 - 00:07:38.355 Read completed with error (sct=0, sc=8) / Write completed with error (sct=0, sc=8), repeated many times, interleaved with: starting I/O failed: -6
00:07:38.354 [2024-12-14 02:48:53.451130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5f190 is same with the state(6) to be set
00:07:38.354 [2024-12-14 02:48:53.479946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6ea000d800 is same with the state(6) to be set
00:07:38.354 [2024-12-14 02:48:53.480438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6ea000d060 is same with the state(6) to be set
00:07:38.354 [2024-12-14 02:48:53.480597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6ea0000c80 is same with the state(6) to be set
00:07:38.355 [2024-12-14 02:48:53.481161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd615e0 is same with the state(6) to be set
00:07:38.355 Initializing NVMe Controllers
00:07:38.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:38.355 Controller IO queue size 128, less than required.
00:07:38.355 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:38.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:38.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:38.355 Initialization complete. Launching workers.
00:07:38.355 ========================================================
00:07:38.355 Latency(us)
00:07:38.355 Device Information : IOPS MiB/s Average min max
00:07:38.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 154.60 0.08 879060.29 263.12 1009621.78
00:07:38.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 174.48 0.09 1063750.94 365.04 2001857.00
00:07:38.355 ========================================================
00:07:38.355 Total : 329.08 0.16 976985.40 263.12 2001857.00
00:07:38.355
00:07:38.355 [2024-12-14 02:48:53.481956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5f190 (9): Bad file descriptor
00:07:38.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:38.355 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.355 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:38.355 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 137379
00:07:38.355 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 137379
00:07:38.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (137379) - No such process
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 137379
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 137379
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 137379
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:38.923 02:48:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.923 02:48:53
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.923 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.923 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.923 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.923 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.923 [2024-12-14 02:48:54.012512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.923 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.923 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.923 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.923 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.923 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.923 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=138056 00:07:38.923 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:38.923 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:38.923 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 138056 00:07:38.923 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.182 [2024-12-14 02:48:54.100848] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
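For reference, the subsystem re-creation and perf run traced above boil down to roughly the following standalone commands (a sketch: rpc_cmd in the test wraps SPDK's JSON-RPC client, shown here as scripts/rpc.py against the default /var/tmp/spdk.sock; the flags are copied from the trace and the paths are abbreviated):

  # Re-create the subsystem deleted in the previous step: allow any host (-a),
  # serial number SPDK00000000000001 (-s), at most 10 namespaces (-m).
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # Add the TCP listener on the target address used throughout this run.
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Attach the Delay0 bdev as a namespace (NSID 1 in the runs above).
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Drive 70% read / 30% write random I/O at queue depth 128 with 512-byte I/Os for 3 seconds,
  # then let the script's kill -0 / sleep 0.5 loop wait for the perf process to exit.
  ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!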
00:07:39.442 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.442 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 138056 00:07:39.442 02:48:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.010 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.010 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 138056 00:07:40.010 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.578 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.578 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 138056 00:07:40.578 02:48:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.148 02:48:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.148 02:48:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 138056 00:07:41.148 02:48:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.717 02:48:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.717 02:48:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 138056 00:07:41.717 02:48:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.976 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.976 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 138056 00:07:41.976 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.236 Initializing NVMe Controllers 00:07:42.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:42.236 Controller IO queue size 128, less than required. 00:07:42.236 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:42.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:42.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:42.236 Initialization complete. Launching workers. 
00:07:42.236 ========================================================
00:07:42.236 Latency(us)
00:07:42.236 Device Information : IOPS MiB/s Average min max
00:07:42.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002118.49 1000124.60 1040819.83
00:07:42.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003902.87 1000173.59 1010022.40
00:07:42.236 ========================================================
00:07:42.236 Total : 256.00 0.12 1003010.68 1000124.60 1040819.83
00:07:42.236
00:07:42.496 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:42.496 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 138056
00:07:42.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (138056) - No such process
00:07:42.496 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 138056
00:07:42.496 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:42.496 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:42.496 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:42.496 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:42.496 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:42.496 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:42.496 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:42.496 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 137359 ']'
00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 137359
00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 137359 ']'
00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 137359
00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 137359
00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo
']' 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 137359' 00:07:42.756 killing process with pid 137359 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 137359 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 137359 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.756 02:48:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.298 02:48:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.298 00:07:45.298 real 0m16.146s 00:07:45.298 user 0m29.324s 00:07:45.298 sys 0m5.350s 00:07:45.298 02:48:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.298 02:48:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.298 ************************************ 00:07:45.298 END TEST nvmf_delete_subsystem 00:07:45.298 ************************************ 00:07:45.298 02:48:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:45.298 02:48:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.298 02:48:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.298 02:48:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.298 ************************************ 00:07:45.298 START TEST nvmf_host_management 00:07:45.298 ************************************ 00:07:45.298 02:48:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:45.298 * Looking for test storage... 
00:07:45.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.298 --rc genhtml_branch_coverage=1 00:07:45.298 --rc genhtml_function_coverage=1 00:07:45.298 --rc genhtml_legend=1 00:07:45.298 --rc geninfo_all_blocks=1 00:07:45.298 --rc geninfo_unexecuted_blocks=1 00:07:45.298 00:07:45.298 ' 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.298 --rc genhtml_branch_coverage=1 00:07:45.298 --rc genhtml_function_coverage=1 00:07:45.298 --rc genhtml_legend=1 00:07:45.298 --rc geninfo_all_blocks=1 00:07:45.298 --rc geninfo_unexecuted_blocks=1 00:07:45.298 00:07:45.298 ' 00:07:45.298 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.298 --rc genhtml_branch_coverage=1 00:07:45.298 --rc genhtml_function_coverage=1 00:07:45.298 --rc genhtml_legend=1 00:07:45.298 --rc geninfo_all_blocks=1 00:07:45.298 --rc geninfo_unexecuted_blocks=1 00:07:45.298 00:07:45.299 ' 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.299 --rc genhtml_branch_coverage=1 00:07:45.299 --rc genhtml_function_coverage=1 00:07:45.299 --rc genhtml_legend=1 00:07:45.299 --rc geninfo_all_blocks=1 00:07:45.299 --rc geninfo_unexecuted_blocks=1 00:07:45.299 00:07:45.299 ' 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:45.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.299 02:49:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:51.879 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:51.879 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:51.880 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:51.880 Found net devices under 0000:af:00.0: cvl_0_0 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.880 02:49:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:51.880 Found net devices under 0000:af:00.1: cvl_0_1 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.880 02:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:51.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:07:51.880 00:07:51.880 --- 10.0.0.2 ping statistics --- 00:07:51.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.880 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:07:51.880 00:07:51.880 --- 10.0.0.1 ping statistics --- 00:07:51.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.880 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=142142 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 142142 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:51.880 02:49:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 142142 ']' 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.880 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.880 [2024-12-14 02:49:06.302453] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:51.880 [2024-12-14 02:49:06.302499] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.880 [2024-12-14 02:49:06.379397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.880 [2024-12-14 02:49:06.403484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.880 [2024-12-14 02:49:06.403520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.880 [2024-12-14 02:49:06.403527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.881 [2024-12-14 02:49:06.403532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.881 [2024-12-14 02:49:06.403537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
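For readers reconstructing the environment, the namespace plumbing and target launch traced in nvmf/common.sh above reduce to roughly this sequence (a sketch with error handling omitted; the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, and the nvmf_tgt flags are taken from the trace, and paths are abbreviated):

  # The target-side port moves into its own network namespace; the initiator side stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagging the rule so nvmftestfini can strip it again later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Sanity-check both directions, then start the target application inside the namespace.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &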
00:07:51.881 [2024-12-14 02:49:06.404967] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.881 [2024-12-14 02:49:06.405075] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.881 [2024-12-14 02:49:06.405182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.881 [2024-12-14 02:49:06.405184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.881 [2024-12-14 02:49:06.533349] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.881 Malloc0 00:07:51.881 [2024-12-14 02:49:06.600672] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=142254 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 142254 /var/tmp/bdevperf.sock 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 142254 ']' 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:51.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:51.881 { 00:07:51.881 "params": { 00:07:51.881 "name": "Nvme$subsystem", 00:07:51.881 "trtype": "$TEST_TRANSPORT", 00:07:51.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.881 "adrfam": "ipv4", 00:07:51.881 "trsvcid": "$NVMF_PORT", 00:07:51.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.881 "hdgst": ${hdgst:-false}, 00:07:51.881 "ddgst": ${ddgst:-false} 00:07:51.881 }, 00:07:51.881 "method": "bdev_nvme_attach_controller" 00:07:51.881 } 00:07:51.881 EOF 00:07:51.881 )") 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:51.881 02:49:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:51.881 "params": { 00:07:51.881 "name": "Nvme0", 00:07:51.881 "trtype": "tcp", 00:07:51.881 "traddr": "10.0.0.2", 00:07:51.881 "adrfam": "ipv4", 00:07:51.881 "trsvcid": "4420", 00:07:51.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:51.881 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:51.881 "hdgst": false, 00:07:51.881 "ddgst": false 00:07:51.881 }, 00:07:51.881 "method": "bdev_nvme_attach_controller" 00:07:51.881 }' 00:07:51.881 [2024-12-14 02:49:06.696708] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
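The bdevperf launch traced above feeds a generated JSON config over /dev/fd/63; the rendered fragment visible in the trace corresponds to a bdev-subsystem config along these lines (a sketch: only the bdev_nvme_attach_controller entry appears in the trace, so the surrounding envelope and the file name /tmp/bdevperf_nvme.json are illustrative, and paths are abbreviated):

  # Hypothetical standalone equivalent of the traced invocation.
  cat > /tmp/bdevperf_nvme.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # Same workload flags as the trace: queue depth 64, 64 KiB I/Os, verify workload, 10 seconds.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
      -q 64 -o 65536 -w verify -t 10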
00:07:51.881 [2024-12-14 02:49:06.696753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142254 ] 00:07:51.881 [2024-12-14 02:49:06.772727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.881 [2024-12-14 02:49:06.794768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.143 Running I/O for 10 seconds... 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=103 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 103 -ge 100 ']' 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:52.143 02:49:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:52.143 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:52.143 [2024-12-14 02:49:07.214825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbedbe0 is same with the state(6) to be set
(... the same tcp.c:1790 recv-state error is repeated many more times for tqpair=0xbedbe0 before the I/O aborts below ...)
00:07:52.143 [2024-12-14 02:49:07.215116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:52.143 [2024-12-14 02:49:07.215147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(... the WRITE command / ABORTED - SQ DELETION completion pair repeats for cid 1 through 62, lba 24704 through 32512 ...)
00:07:52.146 [2024-12-14 02:49:07.216060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:52.146 [2024-12-14 02:49:07.216066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:52.146 [2024-12-14 02:49:07.216092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:07:52.146 [2024-12-14 02:49:07.217004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:52.146 task offset: 24576 on job bdev=Nvme0n1 fails
00:07:52.146
00:07:52.146 Latency(us)
00:07:52.146 [2024-12-14T01:49:07.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:52.146 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:52.146 Job: Nvme0n1 ended in about 0.11 seconds with error
00:07:52.146 Verification LBA range: start 0x0 length 0x400
00:07:52.146 Nvme0n1 : 0.11 1769.32 110.58 589.77 0.00 25007.94 2278.16 26963.38
00:07:52.146 [2024-12-14T01:49:07.279Z] ===================================================================================================================
00:07:52.146 [2024-12-14T01:49:07.279Z] Total : 1769.32 110.58 589.77 0.00 25007.94 2278.16 26963.38
00:07:52.146 [2024-12-14
02:49:07.219348] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.146 [2024-12-14 02:49:07.219368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85a490 (9): Bad file descriptor 00:07:52.146 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.146 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:52.146 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.146 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.146 [2024-12-14 02:49:07.222534] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:52.146 [2024-12-14 02:49:07.222609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:52.146 [2024-12-14 02:49:07.222631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.146 [2024-12-14 02:49:07.222643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:52.146 [2024-12-14 02:49:07.222650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:52.146 [2024-12-14 02:49:07.222656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:52.146 [2024-12-14 02:49:07.222662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x85a490 00:07:52.146 [2024-12-14 02:49:07.222680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85a490 (9): Bad file descriptor 00:07:52.146 [2024-12-14 02:49:07.222691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:52.146 [2024-12-14 02:49:07.222698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:52.146 [2024-12-14 02:49:07.222705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:52.146 [2024-12-14 02:49:07.222713] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
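The failure injected above boils down to two target-side RPCs: the host is dropped from the subsystem's allowed list while bdevperf still holds a queue pair, then added back. A hedged equivalent using scripts/rpc.py directly (NQNs as traced; the default /var/tmp/spdk.sock target socket is assumed):
# Revoke access: in-flight WRITEs on the open queue pair are aborted (SQ DELETION)
# and the host's reconnect attempt is rejected with "does not allow host".
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Restore access so a later connection from host0 is admitted again.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0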
00:07:52.146 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.146 02:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:53.528 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 142254 00:07:53.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (142254) - No such process 00:07:53.528 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:53.528 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:53.528 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:53.528 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:53.528 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:53.528 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.528 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.528 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.528 { 00:07:53.528 "params": { 00:07:53.528 "name": "Nvme$subsystem", 00:07:53.528 "trtype": "$TEST_TRANSPORT", 00:07:53.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.528 "adrfam": "ipv4", 00:07:53.528 "trsvcid": "$NVMF_PORT", 00:07:53.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.528 "hdgst": ${hdgst:-false}, 00:07:53.528 "ddgst": ${ddgst:-false} 00:07:53.528 }, 00:07:53.528 "method": "bdev_nvme_attach_controller" 00:07:53.528 } 00:07:53.528 EOF 00:07:53.528 )") 00:07:53.528 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:53.528 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:53.528 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:53.528 02:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:53.528 "params": { 00:07:53.528 "name": "Nvme0", 00:07:53.528 "trtype": "tcp", 00:07:53.528 "traddr": "10.0.0.2", 00:07:53.528 "adrfam": "ipv4", 00:07:53.528 "trsvcid": "4420", 00:07:53.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:53.528 "hdgst": false, 00:07:53.528 "ddgst": false 00:07:53.528 }, 00:07:53.528 "method": "bdev_nvme_attach_controller" 00:07:53.528 }' 00:07:53.528 [2024-12-14 02:49:08.285097] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
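Before the host was removed, the script confirmed that I/O was actually flowing by polling bdevperf's RPC socket (the read_io_count=103 check traced earlier). A hedged standalone version of that gate, calling scripts/rpc.py in place of the harness's rpc_cmd wrapper, with the socket path, bdev name and threshold taken from the trace and the retry cadence assumed:
for ((i = 10; i != 0; i--)); do
    # num_read_ops for Nvme0n1 as reported by the running bdevperf app itself
    read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
        jq -r '.bdevs[0].num_read_ops')
    [ "$read_io_count" -ge 100 ] && break
    sleep 0.25
done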
00:07:53.528 [2024-12-14 02:49:08.285140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142495 ] 00:07:53.528 [2024-12-14 02:49:08.360046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.528 [2024-12-14 02:49:08.381009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.528 Running I/O for 1 seconds... 00:07:54.468 2048.00 IOPS, 128.00 MiB/s 00:07:54.468 Latency(us) 00:07:54.468 [2024-12-14T01:49:09.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.468 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:54.468 Verification LBA range: start 0x0 length 0x400 00:07:54.468 Nvme0n1 : 1.02 2065.32 129.08 0.00 0.00 30507.71 5867.03 26963.38 00:07:54.468 [2024-12-14T01:49:09.601Z] =================================================================================================================== 00:07:54.468 [2024-12-14T01:49:09.601Z] Total : 2065.32 129.08 0.00 0.00 30507.71 5867.03 26963.38 00:07:54.727 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:54.727 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:54.727 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:54.727 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:54.727 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:54.728 rmmod nvme_tcp 00:07:54.728 rmmod nvme_fabrics 00:07:54.728 rmmod nvme_keyring 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 142142 ']' 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 142142 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 142142 ']' 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 142142 00:07:54.728 02:49:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.728 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 142142 00:07:54.987 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:54.987 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:54.987 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 142142' 00:07:54.987 killing process with pid 142142 00:07:54.987 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 142142 00:07:54.988 02:49:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 142142 00:07:54.988 [2024-12-14 02:49:10.015459] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:54.988 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.988 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:54.988 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.988 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:54.988 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:54.988 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.988 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:54.988 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:54.988 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:54.988 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.988 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.988 02:49:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:57.525 00:07:57.525 real 0m12.131s 00:07:57.525 user 0m18.170s 00:07:57.525 sys 0m5.464s 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.525 ************************************ 00:07:57.525 END TEST nvmf_host_management 00:07:57.525 ************************************ 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
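run_test, which launches nvmf_lvol.sh here, is the same wrapper that produced the START/END banners and the real/user/sys summary above. A hedged sketch of its shape (the real helper in autotest_common.sh also handles xtrace and timing bookkeeping that is omitted here):
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # run the test script with its arguments, printing real/user/sys
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}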
00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.525 ************************************ 00:07:57.525 START TEST nvmf_lvol 00:07:57.525 ************************************ 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:57.525 * Looking for test storage... 00:07:57.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.525 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:57.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.526 --rc genhtml_branch_coverage=1 00:07:57.526 --rc genhtml_function_coverage=1 00:07:57.526 --rc genhtml_legend=1 00:07:57.526 --rc geninfo_all_blocks=1 00:07:57.526 --rc geninfo_unexecuted_blocks=1 00:07:57.526 00:07:57.526 ' 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:57.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.526 --rc genhtml_branch_coverage=1 00:07:57.526 --rc genhtml_function_coverage=1 00:07:57.526 --rc genhtml_legend=1 00:07:57.526 --rc geninfo_all_blocks=1 00:07:57.526 --rc geninfo_unexecuted_blocks=1 00:07:57.526 00:07:57.526 ' 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:57.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.526 --rc genhtml_branch_coverage=1 00:07:57.526 --rc genhtml_function_coverage=1 00:07:57.526 --rc genhtml_legend=1 00:07:57.526 --rc geninfo_all_blocks=1 00:07:57.526 --rc geninfo_unexecuted_blocks=1 00:07:57.526 00:07:57.526 ' 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:57.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.526 --rc genhtml_branch_coverage=1 00:07:57.526 --rc genhtml_function_coverage=1 00:07:57.526 --rc genhtml_legend=1 00:07:57.526 --rc geninfo_all_blocks=1 00:07:57.526 --rc geninfo_unexecuted_blocks=1 00:07:57.526 00:07:57.526 ' 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
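nvmf/common.sh, sourced here, establishes the host identity that later nvme connect calls in the suite reuse. A hedged sketch of that setup, consistent with the values traced just below (the exact parameter expansion used to strip the NQN down to a host ID is an assumption):
NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:80b56b8f-...
NVME_HOSTID=${NVME_HOSTNQN##*:}       # assumed: keep only the trailing UUID as the host ID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVMF_PORT=4420
NVMF_SECOND_PORT=4421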
00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:57.526 02:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:04.103 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.103 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:04.103 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:04.103 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:04.103 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:04.103 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:04.103 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:04.103 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:04.103 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:04.104 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:04.104 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.104 02:49:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:04.104 Found net devices under 0000:af:00.0: cvl_0_0 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:04.104 Found net devices under 0000:af:00.1: cvl_0_1 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:04.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:08:04.104 00:08:04.104 --- 10.0.0.2 ping statistics --- 00:08:04.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.104 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
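In outline, the nvmftestinit phase traced above maps each Intel E810 PCI function to its kernel net device via sysfs, moves one port into a dedicated network namespace for the target, addresses both ends, opens TCP port 4420, and checks reachability. A minimal stand-alone sketch of that sequence follows; the interface names, addresses and PCI functions are copied from this log (common.sh discovers them dynamically, so hard-coding them here is an assumption):

# Run as root. Names/addresses below are the ones printed in this trace.
ls /sys/bus/pci/devices/0000:af:00.0/net/ /sys/bus/pci/devices/0000:af:00.1/net/   # -> cvl_0_0, cvl_0_1

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic in and verify both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1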
00:08:04.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:08:04.104 00:08:04.104 --- 10.0.0.1 ping statistics --- 00:08:04.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.104 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=146232 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 146232 00:08:04.104 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:04.105 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 146232 ']' 00:08:04.105 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.105 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.105 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.105 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.105 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:04.105 [2024-12-14 02:49:18.473442] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
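With the test bed up, nvmfappstart launches nvmf_tgt inside the target namespace and blocks until its RPC socket answers. A simplified equivalent, using the paths and arguments from the trace; the polling loop is only a stand-in for the waitforlisten helper, not its actual implementation:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Cores 0-2 (-m 0x7), shm id 0 (-i 0), all tracepoint groups enabled (-e 0xFFFF).
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!

# Stand-in for waitforlisten: poll the default RPC socket until it responds.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done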
00:08:04.105 [2024-12-14 02:49:18.473488] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.105 [2024-12-14 02:49:18.550012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:04.105 [2024-12-14 02:49:18.572970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.105 [2024-12-14 02:49:18.573005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.105 [2024-12-14 02:49:18.573012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.105 [2024-12-14 02:49:18.573018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.105 [2024-12-14 02:49:18.573023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.105 [2024-12-14 02:49:18.574213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.105 [2024-12-14 02:49:18.574341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.105 [2024-12-14 02:49:18.574341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.105 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.105 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:04.105 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:04.105 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.105 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:04.105 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.105 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:04.105 [2024-12-14 02:49:18.870727] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.105 02:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:04.105 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:04.105 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:04.364 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:04.364 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:04.624 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:04.624 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=21f706ff-e004-43f7-93d0-321d7dd26b20 00:08:04.624 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 21f706ff-e004-43f7-93d0-321d7dd26b20 lvol 20 00:08:04.883 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9aa3f01f-ab47-465f-9d64-c28d39eddf42 00:08:04.883 02:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:05.142 02:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9aa3f01f-ab47-465f-9d64-c28d39eddf42 00:08:05.401 02:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:05.401 [2024-12-14 02:49:20.524195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.660 02:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.660 02:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=146684 00:08:05.660 02:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:05.660 02:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:07.039 02:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9aa3f01f-ab47-465f-9d64-c28d39eddf42 MY_SNAPSHOT 00:08:07.039 02:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c216123b-6c4e-4263-9744-dafc3e0f0c76 00:08:07.039 02:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9aa3f01f-ab47-465f-9d64-c28d39eddf42 30 00:08:07.298 02:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c216123b-6c4e-4263-9744-dafc3e0f0c76 MY_CLONE 00:08:07.557 02:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=901f11e7-c8d5-45fd-8a72-98aa53ab1044 00:08:07.557 02:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 901f11e7-c8d5-45fd-8a72-98aa53ab1044 00:08:08.126 02:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 146684 00:08:16.251 Initializing NVMe Controllers 00:08:16.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:16.251 Controller IO queue size 128, less than required. 00:08:16.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
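The RPC sequence traced above builds the volume stack and exports it over NVMe/TCP, then drives it with spdk_nvme_perf while the snapshot/resize/clone/inflate operations run. Condensed into one sketch, with the UUIDs the log prints replaced by shell variables (the variable names are illustrative, not part of the test):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"

"$rpc" nvmf_create_transport -t tcp -o -u 8192               # TCP transport, 8 KiB in-capsule data
base0=$("$rpc" bdev_malloc_create 64 512)                     # two 64 MiB malloc bdevs, 512 B blocks
base1=$("$rpc" bdev_malloc_create 64 512)
"$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b "$base0 $base1"
lvs=$("$rpc" bdev_lvol_create_lvstore raid0 lvs)              # lvstore on the RAID0 bdev
lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 20)             # 20 MiB logical volume

# Export the lvol on 10.0.0.2:4420 (data and discovery subsystems).
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 10 s of 4 KiB random writes at QD 128 from cores 3-4, concurrent with the lvol operations.
"$spdk/build/bin/spdk_nvme_perf" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perfpid=$!

snap=$("$rpc" bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
"$rpc" bdev_lvol_resize "$lvol" 30
clone=$("$rpc" bdev_lvol_clone "$snap" MY_CLONE)
"$rpc" bdev_lvol_inflate "$clone"
wait "$perfpid"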
00:08:16.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:16.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:16.251 Initialization complete. Launching workers. 00:08:16.251 ======================================================== 00:08:16.251 Latency(us) 00:08:16.251 Device Information : IOPS MiB/s Average min max 00:08:16.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12355.90 48.27 10363.53 1537.63 96916.50 00:08:16.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12199.10 47.65 10491.17 3205.55 42166.51 00:08:16.251 ======================================================== 00:08:16.251 Total : 24555.00 95.92 10426.94 1537.63 96916.50 00:08:16.251 00:08:16.251 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:16.510 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9aa3f01f-ab47-465f-9d64-c28d39eddf42 00:08:16.769 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21f706ff-e004-43f7-93d0-321d7dd26b20 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:17.029 rmmod nvme_tcp 00:08:17.029 rmmod nvme_fabrics 00:08:17.029 rmmod nvme_keyring 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 146232 ']' 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 146232 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 146232 ']' 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 146232 00:08:17.029 02:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:17.029 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.029 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146232 00:08:17.029 02:49:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.029 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.029 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146232' 00:08:17.029 killing process with pid 146232 00:08:17.029 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 146232 00:08:17.029 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 146232 00:08:17.289 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:17.289 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:17.289 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:17.289 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:17.289 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:17.289 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:17.289 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:17.289 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:17.289 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:17.289 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.289 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.289 02:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.196 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:19.196 00:08:19.196 real 0m22.121s 00:08:19.196 user 1m3.747s 00:08:19.196 sys 0m7.622s 00:08:19.196 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.196 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:19.196 ************************************ 00:08:19.196 END TEST nvmf_lvol 00:08:19.196 ************************************ 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.456 ************************************ 00:08:19.456 START TEST nvmf_lvs_grow 00:08:19.456 ************************************ 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:19.456 * Looking for test storage... 
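The nvmf_lvol teardown traced above mirrors the setup: the subsystem, lvol and lvstore are deleted over RPC, then nvmftestfini unloads the initiator-side NVMe modules, stops the target and reverts the host changes. In outline, continuing with the variables from the sketches above (the namespace removal happens inside _remove_spdk_ns, whose trace is suppressed in this log, so the ip netns delete line is an assumption about its effect):

"$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
"$rpc" bdev_lvol_delete "$lvol"
"$rpc" bdev_lvol_delete_lvstore -u "$lvs"

sync
modprobe -v -r nvme-tcp                                 # also drops nvme_fabrics/nvme_keyring, as logged
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                      # stop nvmf_tgt (pid 146232 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore    # remove the SPDK_NVMF ACCEPT rule
ip netns delete cvl_0_0_ns_spdk                         # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1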
00:08:19.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:19.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.456 --rc genhtml_branch_coverage=1 00:08:19.456 --rc genhtml_function_coverage=1 00:08:19.456 --rc genhtml_legend=1 00:08:19.456 --rc geninfo_all_blocks=1 00:08:19.456 --rc geninfo_unexecuted_blocks=1 00:08:19.456 00:08:19.456 ' 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:19.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.456 --rc genhtml_branch_coverage=1 00:08:19.456 --rc genhtml_function_coverage=1 00:08:19.456 --rc genhtml_legend=1 00:08:19.456 --rc geninfo_all_blocks=1 00:08:19.456 --rc geninfo_unexecuted_blocks=1 00:08:19.456 00:08:19.456 ' 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:19.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.456 --rc genhtml_branch_coverage=1 00:08:19.456 --rc genhtml_function_coverage=1 00:08:19.456 --rc genhtml_legend=1 00:08:19.456 --rc geninfo_all_blocks=1 00:08:19.456 --rc geninfo_unexecuted_blocks=1 00:08:19.456 00:08:19.456 ' 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:19.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.456 --rc genhtml_branch_coverage=1 00:08:19.456 --rc genhtml_function_coverage=1 00:08:19.456 --rc genhtml_legend=1 00:08:19.456 --rc geninfo_all_blocks=1 00:08:19.456 --rc geninfo_unexecuted_blocks=1 00:08:19.456 00:08:19.456 ' 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:19.456 02:49:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.456 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.716 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.716 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.716 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.716 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:19.717 02:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:26.290 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:26.290 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.290 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:26.291 02:49:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:26.291 Found net devices under 0000:af:00.0: cvl_0_0 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:26.291 Found net devices under 0000:af:00.1: cvl_0_1 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:26.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:08:26.291 00:08:26.291 --- 10.0.0.2 ping statistics --- 00:08:26.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.291 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:26.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:08:26.291 00:08:26.291 --- 10.0.0.1 ping statistics --- 00:08:26.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.291 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=152158 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 152158 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 152158 ']' 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.291 [2024-12-14 02:49:40.666604] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:26.291 [2024-12-14 02:49:40.666649] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.291 [2024-12-14 02:49:40.742834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.291 [2024-12-14 02:49:40.764106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.291 [2024-12-14 02:49:40.764141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.291 [2024-12-14 02:49:40.764148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.291 [2024-12-14 02:49:40.764153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.291 [2024-12-14 02:49:40.764159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.291 [2024-12-14 02:49:40.764634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.291 02:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:26.291 [2024-12-14 02:49:41.060549] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.291 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:26.291 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.291 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.291 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.291 ************************************ 00:08:26.291 START TEST lvs_grow_clean 00:08:26.291 ************************************ 00:08:26.291 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:26.291 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:26.291 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:26.291 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:26.292 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:26.292 02:49:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:26.292 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:26.292 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.292 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.292 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.292 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:26.292 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:26.551 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ec4ca698-3fcd-454e-afa3-ff48742db470 00:08:26.551 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec4ca698-3fcd-454e-afa3-ff48742db470 00:08:26.551 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:26.810 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:26.810 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:26.810 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ec4ca698-3fcd-454e-afa3-ff48742db470 lvol 150 00:08:26.810 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=53c60665-ff81-4c1b-bb78-49867bb54eed 00:08:26.810 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.810 02:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:27.070 [2024-12-14 02:49:42.106188] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:27.070 [2024-12-14 02:49:42.106236] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:27.070 true 00:08:27.070 02:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
ec4ca698-3fcd-454e-afa3-ff48742db470 00:08:27.070 02:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:27.330 02:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:27.330 02:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:27.589 02:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 53c60665-ff81-4c1b-bb78-49867bb54eed 00:08:27.589 02:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:27.848 [2024-12-14 02:49:42.824319] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.848 02:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.108 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=152454 00:08:28.108 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.108 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 152454 /var/tmp/bdevperf.sock 00:08:28.108 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:28.108 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 152454 ']' 00:08:28.108 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.108 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.108 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:28.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:28.108 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.108 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:28.108 [2024-12-14 02:49:43.052615] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:28.108 [2024-12-14 02:49:43.052658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152454 ] 00:08:28.108 [2024-12-14 02:49:43.126755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.108 [2024-12-14 02:49:43.149165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.108 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.108 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:28.108 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:28.677 Nvme0n1 00:08:28.677 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:28.935 [ 00:08:28.935 { 00:08:28.935 "name": "Nvme0n1", 00:08:28.935 "aliases": [ 00:08:28.935 "53c60665-ff81-4c1b-bb78-49867bb54eed" 00:08:28.935 ], 00:08:28.935 "product_name": "NVMe disk", 00:08:28.935 "block_size": 4096, 00:08:28.935 "num_blocks": 38912, 00:08:28.935 "uuid": "53c60665-ff81-4c1b-bb78-49867bb54eed", 00:08:28.935 "numa_id": 1, 00:08:28.935 "assigned_rate_limits": { 00:08:28.935 "rw_ios_per_sec": 0, 00:08:28.935 "rw_mbytes_per_sec": 0, 00:08:28.935 "r_mbytes_per_sec": 0, 00:08:28.935 "w_mbytes_per_sec": 0 00:08:28.935 }, 00:08:28.935 "claimed": false, 00:08:28.935 "zoned": false, 00:08:28.935 "supported_io_types": { 00:08:28.935 "read": true, 00:08:28.935 "write": true, 00:08:28.935 "unmap": true, 00:08:28.935 "flush": true, 00:08:28.935 "reset": true, 00:08:28.935 "nvme_admin": true, 00:08:28.935 "nvme_io": true, 00:08:28.935 "nvme_io_md": false, 00:08:28.935 "write_zeroes": true, 00:08:28.935 "zcopy": false, 00:08:28.935 "get_zone_info": false, 00:08:28.935 "zone_management": false, 00:08:28.935 "zone_append": false, 00:08:28.935 "compare": true, 00:08:28.935 "compare_and_write": true, 00:08:28.935 "abort": true, 00:08:28.935 "seek_hole": false, 00:08:28.935 "seek_data": false, 00:08:28.935 "copy": true, 00:08:28.935 "nvme_iov_md": false 00:08:28.935 }, 00:08:28.935 "memory_domains": [ 00:08:28.935 { 00:08:28.935 "dma_device_id": "system", 00:08:28.935 "dma_device_type": 1 00:08:28.935 } 00:08:28.935 ], 00:08:28.935 "driver_specific": { 00:08:28.935 "nvme": [ 00:08:28.935 { 00:08:28.935 "trid": { 00:08:28.935 "trtype": "TCP", 00:08:28.935 "adrfam": "IPv4", 00:08:28.935 "traddr": "10.0.0.2", 00:08:28.935 "trsvcid": "4420", 00:08:28.935 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:28.935 }, 00:08:28.935 "ctrlr_data": { 00:08:28.935 "cntlid": 1, 00:08:28.935 "vendor_id": "0x8086", 00:08:28.935 "model_number": "SPDK bdev Controller", 00:08:28.935 "serial_number": "SPDK0", 00:08:28.935 "firmware_revision": "25.01", 00:08:28.935 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:28.935 "oacs": { 00:08:28.935 "security": 0, 00:08:28.935 "format": 0, 00:08:28.935 "firmware": 0, 00:08:28.935 "ns_manage": 0 00:08:28.935 }, 00:08:28.935 "multi_ctrlr": true, 00:08:28.935 
"ana_reporting": false 00:08:28.935 }, 00:08:28.935 "vs": { 00:08:28.935 "nvme_version": "1.3" 00:08:28.935 }, 00:08:28.935 "ns_data": { 00:08:28.935 "id": 1, 00:08:28.935 "can_share": true 00:08:28.935 } 00:08:28.935 } 00:08:28.935 ], 00:08:28.935 "mp_policy": "active_passive" 00:08:28.935 } 00:08:28.935 } 00:08:28.935 ] 00:08:28.935 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=152663 00:08:28.935 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:28.935 02:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:28.935 Running I/O for 10 seconds... 00:08:29.870 Latency(us) 00:08:29.870 [2024-12-14T01:49:45.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.870 Nvme0n1 : 1.00 23456.00 91.62 0.00 0.00 0.00 0.00 0.00 00:08:29.870 [2024-12-14T01:49:45.003Z] =================================================================================================================== 00:08:29.870 [2024-12-14T01:49:45.003Z] Total : 23456.00 91.62 0.00 0.00 0.00 0.00 0.00 00:08:29.870 00:08:30.807 02:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ec4ca698-3fcd-454e-afa3-ff48742db470 00:08:31.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.066 Nvme0n1 : 2.00 23582.50 92.12 0.00 0.00 0.00 0.00 0.00 00:08:31.066 [2024-12-14T01:49:46.199Z] =================================================================================================================== 00:08:31.066 [2024-12-14T01:49:46.199Z] Total : 23582.50 92.12 0.00 0.00 0.00 0.00 0.00 00:08:31.066 00:08:31.066 true 00:08:31.066 02:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec4ca698-3fcd-454e-afa3-ff48742db470 00:08:31.066 02:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:31.325 02:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:31.325 02:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:31.325 02:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 152663 00:08:31.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.895 Nvme0n1 : 3.00 23641.00 92.35 0.00 0.00 0.00 0.00 0.00 00:08:31.895 [2024-12-14T01:49:47.028Z] =================================================================================================================== 00:08:31.895 [2024-12-14T01:49:47.028Z] Total : 23641.00 92.35 0.00 0.00 0.00 0.00 0.00 00:08:31.895 00:08:32.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.832 Nvme0n1 : 4.00 23692.75 92.55 0.00 0.00 0.00 0.00 0.00 00:08:32.832 [2024-12-14T01:49:47.965Z] 
=================================================================================================================== 00:08:32.832 [2024-12-14T01:49:47.965Z] Total : 23692.75 92.55 0.00 0.00 0.00 0.00 0.00 00:08:32.832 00:08:34.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.211 Nvme0n1 : 5.00 23744.60 92.75 0.00 0.00 0.00 0.00 0.00 00:08:34.211 [2024-12-14T01:49:49.344Z] =================================================================================================================== 00:08:34.211 [2024-12-14T01:49:49.344Z] Total : 23744.60 92.75 0.00 0.00 0.00 0.00 0.00 00:08:34.211 00:08:35.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.148 Nvme0n1 : 6.00 23793.67 92.94 0.00 0.00 0.00 0.00 0.00 00:08:35.148 [2024-12-14T01:49:50.282Z] =================================================================================================================== 00:08:35.149 [2024-12-14T01:49:50.282Z] Total : 23793.67 92.94 0.00 0.00 0.00 0.00 0.00 00:08:35.149 00:08:36.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.085 Nvme0n1 : 7.00 23790.71 92.93 0.00 0.00 0.00 0.00 0.00 00:08:36.085 [2024-12-14T01:49:51.218Z] =================================================================================================================== 00:08:36.085 [2024-12-14T01:49:51.218Z] Total : 23790.71 92.93 0.00 0.00 0.00 0.00 0.00 00:08:36.085 00:08:37.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.023 Nvme0n1 : 8.00 23800.38 92.97 0.00 0.00 0.00 0.00 0.00 00:08:37.023 [2024-12-14T01:49:52.156Z] =================================================================================================================== 00:08:37.023 [2024-12-14T01:49:52.156Z] Total : 23800.38 92.97 0.00 0.00 0.00 0.00 0.00 00:08:37.023 00:08:37.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.961 Nvme0n1 : 9.00 23830.44 93.09 0.00 0.00 0.00 0.00 0.00 00:08:37.961 [2024-12-14T01:49:53.094Z] =================================================================================================================== 00:08:37.961 [2024-12-14T01:49:53.094Z] Total : 23830.44 93.09 0.00 0.00 0.00 0.00 0.00 00:08:37.961 00:08:38.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.899 Nvme0n1 : 10.00 23845.80 93.15 0.00 0.00 0.00 0.00 0.00 00:08:38.899 [2024-12-14T01:49:54.032Z] =================================================================================================================== 00:08:38.899 [2024-12-14T01:49:54.032Z] Total : 23845.80 93.15 0.00 0.00 0.00 0.00 0.00 00:08:38.899 00:08:38.899 00:08:38.899 Latency(us) 00:08:38.899 [2024-12-14T01:49:54.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.899 Nvme0n1 : 10.00 23847.56 93.15 0.00 0.00 5364.49 3136.37 11796.48 00:08:38.899 [2024-12-14T01:49:54.032Z] =================================================================================================================== 00:08:38.899 [2024-12-14T01:49:54.032Z] Total : 23847.56 93.15 0.00 0.00 5364.49 3136.37 11796.48 00:08:38.899 { 00:08:38.899 "results": [ 00:08:38.899 { 00:08:38.899 "job": "Nvme0n1", 00:08:38.899 "core_mask": "0x2", 00:08:38.899 "workload": "randwrite", 00:08:38.899 "status": "finished", 00:08:38.899 "queue_depth": 128, 00:08:38.899 "io_size": 4096, 00:08:38.899 
"runtime": 10.004628, 00:08:38.899 "iops": 23847.56334768269, 00:08:38.899 "mibps": 93.15454432688551, 00:08:38.899 "io_failed": 0, 00:08:38.899 "io_timeout": 0, 00:08:38.899 "avg_latency_us": 5364.487865004652, 00:08:38.899 "min_latency_us": 3136.365714285714, 00:08:38.899 "max_latency_us": 11796.48 00:08:38.899 } 00:08:38.899 ], 00:08:38.899 "core_count": 1 00:08:38.899 } 00:08:38.899 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 152454 00:08:38.899 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 152454 ']' 00:08:38.899 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 152454 00:08:38.899 02:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:38.899 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.899 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152454 00:08:39.159 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:39.159 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:39.159 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152454' 00:08:39.159 killing process with pid 152454 00:08:39.159 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 152454 00:08:39.159 Received shutdown signal, test time was about 10.000000 seconds 00:08:39.159 00:08:39.159 Latency(us) 00:08:39.159 [2024-12-14T01:49:54.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.159 [2024-12-14T01:49:54.292Z] =================================================================================================================== 00:08:39.159 [2024-12-14T01:49:54.292Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:39.159 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 152454 00:08:39.159 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.418 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:39.678 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec4ca698-3fcd-454e-afa3-ff48742db470 00:08:39.678 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:39.678 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:39.678 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:39.678 02:49:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:39.938 [2024-12-14 02:49:54.943864] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:39.938 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec4ca698-3fcd-454e-afa3-ff48742db470 00:08:39.938 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:39.938 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec4ca698-3fcd-454e-afa3-ff48742db470 00:08:39.938 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.938 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.938 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.938 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.938 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.938 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.938 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.938 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:39.938 02:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec4ca698-3fcd-454e-afa3-ff48742db470 00:08:40.196 request: 00:08:40.196 { 00:08:40.196 "uuid": "ec4ca698-3fcd-454e-afa3-ff48742db470", 00:08:40.196 "method": "bdev_lvol_get_lvstores", 00:08:40.196 "req_id": 1 00:08:40.196 } 00:08:40.196 Got JSON-RPC error response 00:08:40.196 response: 00:08:40.196 { 00:08:40.196 "code": -19, 00:08:40.196 "message": "No such device" 00:08:40.196 } 00:08:40.196 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:40.196 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:40.196 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:40.196 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:40.196 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.455 aio_bdev 00:08:40.455 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 53c60665-ff81-4c1b-bb78-49867bb54eed 00:08:40.455 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=53c60665-ff81-4c1b-bb78-49867bb54eed 00:08:40.455 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.455 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:40.455 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.455 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.455 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:40.455 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 53c60665-ff81-4c1b-bb78-49867bb54eed -t 2000 00:08:40.715 [ 00:08:40.715 { 00:08:40.715 "name": "53c60665-ff81-4c1b-bb78-49867bb54eed", 00:08:40.715 "aliases": [ 00:08:40.715 "lvs/lvol" 00:08:40.715 ], 00:08:40.715 "product_name": "Logical Volume", 00:08:40.715 "block_size": 4096, 00:08:40.715 "num_blocks": 38912, 00:08:40.715 "uuid": "53c60665-ff81-4c1b-bb78-49867bb54eed", 00:08:40.715 "assigned_rate_limits": { 00:08:40.715 "rw_ios_per_sec": 0, 00:08:40.715 "rw_mbytes_per_sec": 0, 00:08:40.715 "r_mbytes_per_sec": 0, 00:08:40.715 "w_mbytes_per_sec": 0 00:08:40.715 }, 00:08:40.715 "claimed": false, 00:08:40.715 "zoned": false, 00:08:40.715 "supported_io_types": { 00:08:40.715 "read": true, 00:08:40.715 "write": true, 00:08:40.715 "unmap": true, 00:08:40.715 "flush": false, 00:08:40.715 "reset": true, 00:08:40.715 "nvme_admin": false, 00:08:40.715 "nvme_io": false, 00:08:40.715 "nvme_io_md": false, 00:08:40.715 "write_zeroes": true, 00:08:40.715 "zcopy": false, 00:08:40.715 "get_zone_info": false, 00:08:40.715 "zone_management": false, 00:08:40.715 "zone_append": false, 00:08:40.715 "compare": false, 00:08:40.715 "compare_and_write": false, 00:08:40.715 "abort": false, 00:08:40.715 "seek_hole": true, 00:08:40.715 "seek_data": true, 00:08:40.715 "copy": false, 00:08:40.715 "nvme_iov_md": false 00:08:40.715 }, 00:08:40.715 "driver_specific": { 00:08:40.715 "lvol": { 00:08:40.715 "lvol_store_uuid": "ec4ca698-3fcd-454e-afa3-ff48742db470", 00:08:40.715 "base_bdev": "aio_bdev", 00:08:40.715 "thin_provision": false, 00:08:40.715 "num_allocated_clusters": 38, 00:08:40.715 "snapshot": false, 00:08:40.715 "clone": false, 00:08:40.715 "esnap_clone": false 00:08:40.715 } 00:08:40.715 } 00:08:40.715 } 00:08:40.715 ] 00:08:40.715 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:40.715 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec4ca698-3fcd-454e-afa3-ff48742db470 00:08:40.715 
02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:40.975 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:40.975 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec4ca698-3fcd-454e-afa3-ff48742db470 00:08:40.975 02:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:40.975 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:40.975 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 53c60665-ff81-4c1b-bb78-49867bb54eed 00:08:41.234 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec4ca698-3fcd-454e-afa3-ff48742db470 00:08:41.493 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.752 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.753 00:08:41.753 real 0m15.570s 00:08:41.753 user 0m15.127s 00:08:41.753 sys 0m1.488s 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:41.753 ************************************ 00:08:41.753 END TEST lvs_grow_clean 00:08:41.753 ************************************ 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:41.753 ************************************ 00:08:41.753 START TEST lvs_grow_dirty 00:08:41.753 ************************************ 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.753 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.012 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:42.012 02:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:42.272 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4a18d376-4a9d-4e4c-8c42-8e952a9259df 00:08:42.272 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a18d376-4a9d-4e4c-8c42-8e952a9259df 00:08:42.272 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:42.272 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:42.272 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:42.272 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4a18d376-4a9d-4e4c-8c42-8e952a9259df lvol 150 00:08:42.531 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f2626715-4fa8-4465-a286-ace6a4886141 00:08:42.531 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.531 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:42.790 [2024-12-14 02:49:57.724140] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:42.790 [2024-12-14 02:49:57.724187] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:42.790 true 00:08:42.790 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a18d376-4a9d-4e4c-8c42-8e952a9259df 00:08:42.790 02:49:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:42.790 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:42.790 02:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:43.050 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f2626715-4fa8-4465-a286-ace6a4886141 00:08:43.309 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:43.569 [2024-12-14 02:49:58.450322] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.569 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.569 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=155191 00:08:43.569 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:43.569 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:43.569 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 155191 /var/tmp/bdevperf.sock 00:08:43.569 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 155191 ']' 00:08:43.569 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:43.569 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.569 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:43.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:43.569 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.569 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.569 [2024-12-14 02:49:58.693250] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:43.569 [2024-12-14 02:49:58.693293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155191 ] 00:08:43.829 [2024-12-14 02:49:58.766823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.829 [2024-12-14 02:49:58.788765] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.829 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.829 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:43.829 02:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:44.088 Nvme0n1 00:08:44.088 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:44.347 [ 00:08:44.347 { 00:08:44.347 "name": "Nvme0n1", 00:08:44.347 "aliases": [ 00:08:44.347 "f2626715-4fa8-4465-a286-ace6a4886141" 00:08:44.347 ], 00:08:44.347 "product_name": "NVMe disk", 00:08:44.347 "block_size": 4096, 00:08:44.347 "num_blocks": 38912, 00:08:44.347 "uuid": "f2626715-4fa8-4465-a286-ace6a4886141", 00:08:44.347 "numa_id": 1, 00:08:44.347 "assigned_rate_limits": { 00:08:44.347 "rw_ios_per_sec": 0, 00:08:44.347 "rw_mbytes_per_sec": 0, 00:08:44.347 "r_mbytes_per_sec": 0, 00:08:44.347 "w_mbytes_per_sec": 0 00:08:44.347 }, 00:08:44.347 "claimed": false, 00:08:44.347 "zoned": false, 00:08:44.347 "supported_io_types": { 00:08:44.347 "read": true, 00:08:44.347 "write": true, 00:08:44.347 "unmap": true, 00:08:44.347 "flush": true, 00:08:44.347 "reset": true, 00:08:44.347 "nvme_admin": true, 00:08:44.347 "nvme_io": true, 00:08:44.347 "nvme_io_md": false, 00:08:44.347 "write_zeroes": true, 00:08:44.347 "zcopy": false, 00:08:44.347 "get_zone_info": false, 00:08:44.347 "zone_management": false, 00:08:44.347 "zone_append": false, 00:08:44.347 "compare": true, 00:08:44.347 "compare_and_write": true, 00:08:44.347 "abort": true, 00:08:44.347 "seek_hole": false, 00:08:44.347 "seek_data": false, 00:08:44.347 "copy": true, 00:08:44.348 "nvme_iov_md": false 00:08:44.348 }, 00:08:44.348 "memory_domains": [ 00:08:44.348 { 00:08:44.348 "dma_device_id": "system", 00:08:44.348 "dma_device_type": 1 00:08:44.348 } 00:08:44.348 ], 00:08:44.348 "driver_specific": { 00:08:44.348 "nvme": [ 00:08:44.348 { 00:08:44.348 "trid": { 00:08:44.348 "trtype": "TCP", 00:08:44.348 "adrfam": "IPv4", 00:08:44.348 "traddr": "10.0.0.2", 00:08:44.348 "trsvcid": "4420", 00:08:44.348 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:44.348 }, 00:08:44.348 "ctrlr_data": { 00:08:44.348 "cntlid": 1, 00:08:44.348 "vendor_id": "0x8086", 00:08:44.348 "model_number": "SPDK bdev Controller", 00:08:44.348 "serial_number": "SPDK0", 00:08:44.348 "firmware_revision": "25.01", 00:08:44.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:44.348 "oacs": { 00:08:44.348 "security": 0, 00:08:44.348 "format": 0, 00:08:44.348 "firmware": 0, 00:08:44.348 "ns_manage": 0 00:08:44.348 }, 00:08:44.348 "multi_ctrlr": true, 00:08:44.348 
"ana_reporting": false 00:08:44.348 }, 00:08:44.348 "vs": { 00:08:44.348 "nvme_version": "1.3" 00:08:44.348 }, 00:08:44.348 "ns_data": { 00:08:44.348 "id": 1, 00:08:44.348 "can_share": true 00:08:44.348 } 00:08:44.348 } 00:08:44.348 ], 00:08:44.348 "mp_policy": "active_passive" 00:08:44.348 } 00:08:44.348 } 00:08:44.348 ] 00:08:44.348 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=155214 00:08:44.348 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:44.348 02:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:44.348 Running I/O for 10 seconds... 00:08:45.726 Latency(us) 00:08:45.726 [2024-12-14T01:50:00.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.726 Nvme0n1 : 1.00 23242.00 90.79 0.00 0.00 0.00 0.00 0.00 00:08:45.726 [2024-12-14T01:50:00.859Z] =================================================================================================================== 00:08:45.726 [2024-12-14T01:50:00.859Z] Total : 23242.00 90.79 0.00 0.00 0.00 0.00 0.00 00:08:45.726 00:08:46.294 02:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4a18d376-4a9d-4e4c-8c42-8e952a9259df 00:08:46.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.552 Nvme0n1 : 2.00 23370.00 91.29 0.00 0.00 0.00 0.00 0.00 00:08:46.552 [2024-12-14T01:50:01.685Z] =================================================================================================================== 00:08:46.552 [2024-12-14T01:50:01.685Z] Total : 23370.00 91.29 0.00 0.00 0.00 0.00 0.00 00:08:46.552 00:08:46.552 true 00:08:46.552 02:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a18d376-4a9d-4e4c-8c42-8e952a9259df 00:08:46.552 02:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:46.812 02:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:46.812 02:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:46.812 02:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 155214 00:08:47.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.381 Nvme0n1 : 3.00 23535.00 91.93 0.00 0.00 0.00 0.00 0.00 00:08:47.381 [2024-12-14T01:50:02.514Z] =================================================================================================================== 00:08:47.381 [2024-12-14T01:50:02.514Z] Total : 23535.00 91.93 0.00 0.00 0.00 0.00 0.00 00:08:47.381 00:08:48.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.759 Nvme0n1 : 4.00 23648.50 92.38 0.00 0.00 0.00 0.00 0.00 00:08:48.759 [2024-12-14T01:50:03.892Z] 
=================================================================================================================== 00:08:48.759 [2024-12-14T01:50:03.892Z] Total : 23648.50 92.38 0.00 0.00 0.00 0.00 0.00 00:08:48.759 00:08:49.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.697 Nvme0n1 : 5.00 23735.20 92.72 0.00 0.00 0.00 0.00 0.00 00:08:49.697 [2024-12-14T01:50:04.830Z] =================================================================================================================== 00:08:49.697 [2024-12-14T01:50:04.830Z] Total : 23735.20 92.72 0.00 0.00 0.00 0.00 0.00 00:08:49.697 00:08:50.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.633 Nvme0n1 : 6.00 23788.50 92.92 0.00 0.00 0.00 0.00 0.00 00:08:50.633 [2024-12-14T01:50:05.766Z] =================================================================================================================== 00:08:50.633 [2024-12-14T01:50:05.766Z] Total : 23788.50 92.92 0.00 0.00 0.00 0.00 0.00 00:08:50.633 00:08:51.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.571 Nvme0n1 : 7.00 23829.14 93.08 0.00 0.00 0.00 0.00 0.00 00:08:51.571 [2024-12-14T01:50:06.704Z] =================================================================================================================== 00:08:51.571 [2024-12-14T01:50:06.704Z] Total : 23829.14 93.08 0.00 0.00 0.00 0.00 0.00 00:08:51.571 00:08:52.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.509 Nvme0n1 : 8.00 23870.25 93.24 0.00 0.00 0.00 0.00 0.00 00:08:52.509 [2024-12-14T01:50:07.642Z] =================================================================================================================== 00:08:52.509 [2024-12-14T01:50:07.642Z] Total : 23870.25 93.24 0.00 0.00 0.00 0.00 0.00 00:08:52.509 00:08:53.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.447 Nvme0n1 : 9.00 23893.22 93.33 0.00 0.00 0.00 0.00 0.00 00:08:53.447 [2024-12-14T01:50:08.580Z] =================================================================================================================== 00:08:53.447 [2024-12-14T01:50:08.580Z] Total : 23893.22 93.33 0.00 0.00 0.00 0.00 0.00 00:08:53.447 00:08:54.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.385 Nvme0n1 : 10.00 23923.40 93.45 0.00 0.00 0.00 0.00 0.00 00:08:54.385 [2024-12-14T01:50:09.518Z] =================================================================================================================== 00:08:54.385 [2024-12-14T01:50:09.518Z] Total : 23923.40 93.45 0.00 0.00 0.00 0.00 0.00 00:08:54.385 00:08:54.385 00:08:54.385 Latency(us) 00:08:54.385 [2024-12-14T01:50:09.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.385 Nvme0n1 : 10.01 23922.91 93.45 0.00 0.00 5347.45 3183.18 14043.43 00:08:54.385 [2024-12-14T01:50:09.518Z] =================================================================================================================== 00:08:54.385 [2024-12-14T01:50:09.518Z] Total : 23922.91 93.45 0.00 0.00 5347.45 3183.18 14043.43 00:08:54.385 { 00:08:54.385 "results": [ 00:08:54.385 { 00:08:54.385 "job": "Nvme0n1", 00:08:54.385 "core_mask": "0x2", 00:08:54.385 "workload": "randwrite", 00:08:54.385 "status": "finished", 00:08:54.385 "queue_depth": 128, 00:08:54.385 "io_size": 4096, 00:08:54.385 
"runtime": 10.005557, 00:08:54.385 "iops": 23922.906041112954, 00:08:54.385 "mibps": 93.44885172309748, 00:08:54.385 "io_failed": 0, 00:08:54.385 "io_timeout": 0, 00:08:54.385 "avg_latency_us": 5347.452681616726, 00:08:54.385 "min_latency_us": 3183.177142857143, 00:08:54.385 "max_latency_us": 14043.42857142857 00:08:54.385 } 00:08:54.385 ], 00:08:54.385 "core_count": 1 00:08:54.385 } 00:08:54.385 02:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 155191 00:08:54.385 02:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 155191 ']' 00:08:54.385 02:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 155191 00:08:54.385 02:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:54.385 02:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.385 02:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 155191 00:08:54.645 02:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:54.645 02:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:54.645 02:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 155191' 00:08:54.645 killing process with pid 155191 00:08:54.645 02:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 155191 00:08:54.645 Received shutdown signal, test time was about 10.000000 seconds 00:08:54.645 00:08:54.645 Latency(us) 00:08:54.645 [2024-12-14T01:50:09.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.645 [2024-12-14T01:50:09.778Z] =================================================================================================================== 00:08:54.645 [2024-12-14T01:50:09.778Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:54.645 02:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 155191 00:08:54.645 02:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.905 02:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:55.164 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a18d376-4a9d-4e4c-8c42-8e952a9259df 00:08:55.164 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:55.164 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:55.164 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:55.164 02:50:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 152158 00:08:55.164 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 152158 00:08:55.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 152158 Killed "${NVMF_APP[@]}" "$@" 00:08:55.423 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:55.423 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:55.424 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:55.424 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.424 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:55.424 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=157092 00:08:55.424 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 157092 00:08:55.424 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:55.424 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 157092 ']' 00:08:55.424 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.424 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.424 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.424 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.424 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:55.424 [2024-12-14 02:50:10.369813] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:55.424 [2024-12-14 02:50:10.369862] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.424 [2024-12-14 02:50:10.449375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.424 [2024-12-14 02:50:10.469854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.424 [2024-12-14 02:50:10.469888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.424 [2024-12-14 02:50:10.469894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.424 [2024-12-14 02:50:10.469900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:55.424 [2024-12-14 02:50:10.469905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.424 [2024-12-14 02:50:10.470383] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:55.683 [2024-12-14 02:50:10.778774] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:55.683 [2024-12-14 02:50:10.778874] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:55.683 [2024-12-14 02:50:10.778899] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f2626715-4fa8-4465-a286-ace6a4886141 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f2626715-4fa8-4465-a286-ace6a4886141 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.683 02:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:55.942 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f2626715-4fa8-4465-a286-ace6a4886141 -t 2000 00:08:56.202 [ 00:08:56.202 { 00:08:56.202 "name": "f2626715-4fa8-4465-a286-ace6a4886141", 00:08:56.202 "aliases": [ 00:08:56.202 "lvs/lvol" 00:08:56.202 ], 00:08:56.202 "product_name": "Logical Volume", 00:08:56.202 "block_size": 4096, 00:08:56.202 "num_blocks": 38912, 00:08:56.202 "uuid": "f2626715-4fa8-4465-a286-ace6a4886141", 00:08:56.202 "assigned_rate_limits": { 00:08:56.202 "rw_ios_per_sec": 0, 00:08:56.202 "rw_mbytes_per_sec": 0, 
00:08:56.202 "r_mbytes_per_sec": 0, 00:08:56.202 "w_mbytes_per_sec": 0 00:08:56.202 }, 00:08:56.202 "claimed": false, 00:08:56.202 "zoned": false, 00:08:56.202 "supported_io_types": { 00:08:56.202 "read": true, 00:08:56.202 "write": true, 00:08:56.202 "unmap": true, 00:08:56.202 "flush": false, 00:08:56.202 "reset": true, 00:08:56.202 "nvme_admin": false, 00:08:56.202 "nvme_io": false, 00:08:56.202 "nvme_io_md": false, 00:08:56.202 "write_zeroes": true, 00:08:56.202 "zcopy": false, 00:08:56.202 "get_zone_info": false, 00:08:56.202 "zone_management": false, 00:08:56.202 "zone_append": false, 00:08:56.202 "compare": false, 00:08:56.202 "compare_and_write": false, 00:08:56.202 "abort": false, 00:08:56.202 "seek_hole": true, 00:08:56.202 "seek_data": true, 00:08:56.202 "copy": false, 00:08:56.202 "nvme_iov_md": false 00:08:56.202 }, 00:08:56.202 "driver_specific": { 00:08:56.202 "lvol": { 00:08:56.202 "lvol_store_uuid": "4a18d376-4a9d-4e4c-8c42-8e952a9259df", 00:08:56.202 "base_bdev": "aio_bdev", 00:08:56.202 "thin_provision": false, 00:08:56.202 "num_allocated_clusters": 38, 00:08:56.202 "snapshot": false, 00:08:56.202 "clone": false, 00:08:56.202 "esnap_clone": false 00:08:56.202 } 00:08:56.202 } 00:08:56.202 } 00:08:56.202 ] 00:08:56.202 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:56.202 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a18d376-4a9d-4e4c-8c42-8e952a9259df 00:08:56.202 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:56.461 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:56.461 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a18d376-4a9d-4e4c-8c42-8e952a9259df 00:08:56.461 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:56.461 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:56.461 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:56.720 [2024-12-14 02:50:11.743860] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:56.720 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a18d376-4a9d-4e4c-8c42-8e952a9259df 00:08:56.720 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:56.720 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a18d376-4a9d-4e4c-8c42-8e952a9259df 00:08:56.720 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.720 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.720 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.720 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.720 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.720 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.720 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.720 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:56.720 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a18d376-4a9d-4e4c-8c42-8e952a9259df 00:08:56.980 request: 00:08:56.980 { 00:08:56.980 "uuid": "4a18d376-4a9d-4e4c-8c42-8e952a9259df", 00:08:56.980 "method": "bdev_lvol_get_lvstores", 00:08:56.980 "req_id": 1 00:08:56.980 } 00:08:56.980 Got JSON-RPC error response 00:08:56.980 response: 00:08:56.980 { 00:08:56.980 "code": -19, 00:08:56.980 "message": "No such device" 00:08:56.980 } 00:08:56.980 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:56.980 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:56.980 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:56.980 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:56.980 02:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:57.239 aio_bdev 00:08:57.239 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f2626715-4fa8-4465-a286-ace6a4886141 00:08:57.239 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f2626715-4fa8-4465-a286-ace6a4886141 00:08:57.239 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.239 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:57.239 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.239 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.239 02:50:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:57.239 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f2626715-4fa8-4465-a286-ace6a4886141 -t 2000 00:08:57.500 [ 00:08:57.500 { 00:08:57.500 "name": "f2626715-4fa8-4465-a286-ace6a4886141", 00:08:57.500 "aliases": [ 00:08:57.500 "lvs/lvol" 00:08:57.500 ], 00:08:57.500 "product_name": "Logical Volume", 00:08:57.500 "block_size": 4096, 00:08:57.500 "num_blocks": 38912, 00:08:57.500 "uuid": "f2626715-4fa8-4465-a286-ace6a4886141", 00:08:57.500 "assigned_rate_limits": { 00:08:57.500 "rw_ios_per_sec": 0, 00:08:57.500 "rw_mbytes_per_sec": 0, 00:08:57.500 "r_mbytes_per_sec": 0, 00:08:57.500 "w_mbytes_per_sec": 0 00:08:57.500 }, 00:08:57.500 "claimed": false, 00:08:57.500 "zoned": false, 00:08:57.500 "supported_io_types": { 00:08:57.500 "read": true, 00:08:57.500 "write": true, 00:08:57.500 "unmap": true, 00:08:57.500 "flush": false, 00:08:57.500 "reset": true, 00:08:57.500 "nvme_admin": false, 00:08:57.500 "nvme_io": false, 00:08:57.500 "nvme_io_md": false, 00:08:57.500 "write_zeroes": true, 00:08:57.500 "zcopy": false, 00:08:57.500 "get_zone_info": false, 00:08:57.500 "zone_management": false, 00:08:57.500 "zone_append": false, 00:08:57.500 "compare": false, 00:08:57.500 "compare_and_write": false, 00:08:57.500 "abort": false, 00:08:57.500 "seek_hole": true, 00:08:57.500 "seek_data": true, 00:08:57.500 "copy": false, 00:08:57.500 "nvme_iov_md": false 00:08:57.500 }, 00:08:57.500 "driver_specific": { 00:08:57.500 "lvol": { 00:08:57.500 "lvol_store_uuid": "4a18d376-4a9d-4e4c-8c42-8e952a9259df", 00:08:57.500 "base_bdev": "aio_bdev", 00:08:57.500 "thin_provision": false, 00:08:57.500 "num_allocated_clusters": 38, 00:08:57.500 "snapshot": false, 00:08:57.500 "clone": false, 00:08:57.500 "esnap_clone": false 00:08:57.500 } 00:08:57.500 } 00:08:57.500 } 00:08:57.500 ] 00:08:57.500 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:57.500 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:57.500 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a18d376-4a9d-4e4c-8c42-8e952a9259df 00:08:57.759 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:57.759 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a18d376-4a9d-4e4c-8c42-8e952a9259df 00:08:57.759 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:58.019 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:58.019 02:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f2626715-4fa8-4465-a286-ace6a4886141 00:08:58.019 02:50:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4a18d376-4a9d-4e4c-8c42-8e952a9259df 00:08:58.278 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.537 00:08:58.537 real 0m16.788s 00:08:58.537 user 0m43.350s 00:08:58.537 sys 0m3.856s 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.537 ************************************ 00:08:58.537 END TEST lvs_grow_dirty 00:08:58.537 ************************************ 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:58.537 nvmf_trace.0 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:58.537 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:58.537 rmmod nvme_tcp 00:08:58.537 rmmod nvme_fabrics 00:08:58.797 rmmod nvme_keyring 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:58.797 
02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 157092 ']' 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 157092 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 157092 ']' 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 157092 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 157092 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 157092' 00:08:58.797 killing process with pid 157092 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 157092 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 157092 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.797 02:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.334 02:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.334 00:09:01.334 real 0m41.600s 00:09:01.334 user 1m4.074s 00:09:01.334 sys 0m10.312s 00:09:01.334 02:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.334 02:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:01.334 ************************************ 00:09:01.334 END TEST nvmf_lvs_grow 00:09:01.334 ************************************ 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.334 ************************************ 00:09:01.334 START TEST nvmf_bdev_io_wait 00:09:01.334 ************************************ 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:01.334 * Looking for test storage... 00:09:01.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:01.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.334 --rc genhtml_branch_coverage=1 00:09:01.334 --rc genhtml_function_coverage=1 00:09:01.334 --rc genhtml_legend=1 00:09:01.334 --rc geninfo_all_blocks=1 00:09:01.334 --rc geninfo_unexecuted_blocks=1 00:09:01.334 00:09:01.334 ' 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:01.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.334 --rc genhtml_branch_coverage=1 00:09:01.334 --rc genhtml_function_coverage=1 00:09:01.334 --rc genhtml_legend=1 00:09:01.334 --rc geninfo_all_blocks=1 00:09:01.334 --rc geninfo_unexecuted_blocks=1 00:09:01.334 00:09:01.334 ' 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:01.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.334 --rc genhtml_branch_coverage=1 00:09:01.334 --rc genhtml_function_coverage=1 00:09:01.334 --rc genhtml_legend=1 00:09:01.334 --rc geninfo_all_blocks=1 00:09:01.334 --rc geninfo_unexecuted_blocks=1 00:09:01.334 00:09:01.334 ' 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:01.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.334 --rc genhtml_branch_coverage=1 00:09:01.334 --rc genhtml_function_coverage=1 00:09:01.334 --rc genhtml_legend=1 00:09:01.334 --rc geninfo_all_blocks=1 00:09:01.334 --rc geninfo_unexecuted_blocks=1 00:09:01.334 00:09:01.334 ' 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.334 02:50:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.334 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.335 02:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:07.911 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:07.911 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.911 02:50:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:07.911 Found net devices under 0000:af:00.0: cvl_0_0 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:07.911 Found net devices under 0000:af:00.1: cvl_0_1 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.911 02:50:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.911 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.911 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.911 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:07.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:09:07.912 00:09:07.912 --- 10.0.0.2 ping statistics --- 00:09:07.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.912 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:07.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:07.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:09:07.912 00:09:07.912 --- 10.0.0.1 ping statistics --- 00:09:07.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.912 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=161209 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 161209 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 161209 ']' 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.912 [2024-12-14 02:50:22.403368] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
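[editor's note] The ping exchanges above are the connectivity check for the test topology: the target-side port lives in its own network namespace, the initiator-side port stays in the default namespace, and the NVMe-oF target is then launched inside the namespace with --wait-for-rpc. A condensed sketch of that setup with the names and addresses from this run (cvl_0_0 / cvl_0_1 and 10.0.0.1 / 10.0.0.2 are specific to this host; the iptables comment option from the trace is omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
  # the target itself then runs inside the namespace, paused until RPC configuration:
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &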
00:09:07.912 [2024-12-14 02:50:22.403415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.912 [2024-12-14 02:50:22.482146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.912 [2024-12-14 02:50:22.505016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.912 [2024-12-14 02:50:22.505054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.912 [2024-12-14 02:50:22.505061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.912 [2024-12-14 02:50:22.505067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.912 [2024-12-14 02:50:22.505073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.912 [2024-12-14 02:50:22.506305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.912 [2024-12-14 02:50:22.506430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.912 [2024-12-14 02:50:22.506462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.912 [2024-12-14 02:50:22.506462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:07.912 [2024-12-14 02:50:22.670525] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.912 Malloc0 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.912 [2024-12-14 02:50:22.717769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=161300 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=161303 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:07.912 { 00:09:07.912 "params": { 
00:09:07.912 "name": "Nvme$subsystem", 00:09:07.912 "trtype": "$TEST_TRANSPORT", 00:09:07.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.912 "adrfam": "ipv4", 00:09:07.912 "trsvcid": "$NVMF_PORT", 00:09:07.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.912 "hdgst": ${hdgst:-false}, 00:09:07.912 "ddgst": ${ddgst:-false} 00:09:07.912 }, 00:09:07.912 "method": "bdev_nvme_attach_controller" 00:09:07.912 } 00:09:07.912 EOF 00:09:07.912 )") 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=161306 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:07.912 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:07.913 { 00:09:07.913 "params": { 00:09:07.913 "name": "Nvme$subsystem", 00:09:07.913 "trtype": "$TEST_TRANSPORT", 00:09:07.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.913 "adrfam": "ipv4", 00:09:07.913 "trsvcid": "$NVMF_PORT", 00:09:07.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.913 "hdgst": ${hdgst:-false}, 00:09:07.913 "ddgst": ${ddgst:-false} 00:09:07.913 }, 00:09:07.913 "method": "bdev_nvme_attach_controller" 00:09:07.913 } 00:09:07.913 EOF 00:09:07.913 )") 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=161310 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:07.913 { 00:09:07.913 "params": { 00:09:07.913 "name": "Nvme$subsystem", 00:09:07.913 "trtype": "$TEST_TRANSPORT", 00:09:07.913 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:09:07.913 "adrfam": "ipv4", 00:09:07.913 "trsvcid": "$NVMF_PORT", 00:09:07.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.913 "hdgst": ${hdgst:-false}, 00:09:07.913 "ddgst": ${ddgst:-false} 00:09:07.913 }, 00:09:07.913 "method": "bdev_nvme_attach_controller" 00:09:07.913 } 00:09:07.913 EOF 00:09:07.913 )") 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:07.913 { 00:09:07.913 "params": { 00:09:07.913 "name": "Nvme$subsystem", 00:09:07.913 "trtype": "$TEST_TRANSPORT", 00:09:07.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.913 "adrfam": "ipv4", 00:09:07.913 "trsvcid": "$NVMF_PORT", 00:09:07.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.913 "hdgst": ${hdgst:-false}, 00:09:07.913 "ddgst": ${ddgst:-false} 00:09:07.913 }, 00:09:07.913 "method": "bdev_nvme_attach_controller" 00:09:07.913 } 00:09:07.913 EOF 00:09:07.913 )") 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 161300 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:07.913 "params": { 00:09:07.913 "name": "Nvme1", 00:09:07.913 "trtype": "tcp", 00:09:07.913 "traddr": "10.0.0.2", 00:09:07.913 "adrfam": "ipv4", 00:09:07.913 "trsvcid": "4420", 00:09:07.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:07.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:07.913 "hdgst": false, 00:09:07.913 "ddgst": false 00:09:07.913 }, 00:09:07.913 "method": "bdev_nvme_attach_controller" 00:09:07.913 }' 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:07.913 "params": { 00:09:07.913 "name": "Nvme1", 00:09:07.913 "trtype": "tcp", 00:09:07.913 "traddr": "10.0.0.2", 00:09:07.913 "adrfam": "ipv4", 00:09:07.913 "trsvcid": "4420", 00:09:07.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:07.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:07.913 "hdgst": false, 00:09:07.913 "ddgst": false 00:09:07.913 }, 00:09:07.913 "method": "bdev_nvme_attach_controller" 00:09:07.913 }' 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:07.913 "params": { 00:09:07.913 "name": "Nvme1", 00:09:07.913 "trtype": "tcp", 00:09:07.913 "traddr": "10.0.0.2", 00:09:07.913 "adrfam": "ipv4", 00:09:07.913 "trsvcid": "4420", 00:09:07.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:07.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:07.913 "hdgst": false, 00:09:07.913 "ddgst": false 00:09:07.913 }, 00:09:07.913 "method": "bdev_nvme_attach_controller" 00:09:07.913 }' 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:07.913 02:50:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:07.913 "params": { 00:09:07.913 "name": "Nvme1", 00:09:07.913 "trtype": "tcp", 00:09:07.913 "traddr": "10.0.0.2", 00:09:07.913 "adrfam": "ipv4", 00:09:07.913 "trsvcid": "4420", 00:09:07.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:07.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:07.913 "hdgst": false, 00:09:07.913 "ddgst": false 00:09:07.913 }, 00:09:07.913 "method": "bdev_nvme_attach_controller" 00:09:07.913 }' 00:09:07.913 [2024-12-14 02:50:22.769931] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:07.913 [2024-12-14 02:50:22.769982] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:07.913 [2024-12-14 02:50:22.771088] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:07.913 [2024-12-14 02:50:22.771133] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:07.913 [2024-12-14 02:50:22.773684] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:07.913 [2024-12-14 02:50:22.773728] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:07.913 [2024-12-14 02:50:22.774940] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
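Each of the four resolved JSON blobs printed above is handed to its own bdevperf instance through /dev/fd/63 (process substitution), which is why four separate DPDK/SPDK initializations follow, one per core mask. Stripped of the xtrace noise, the four invocations recorded in the log reduce to the sketch below; gen_nvmf_target_json is the test helper whose output is shown above, so the process substitution stands in for those blobs:

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# One instance per workload, each on its own core (-m), with its own shm id (-i) and 256 MB of memory (-s).
$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
$BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
$BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
$BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
wait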
00:09:07.913 [2024-12-14 02:50:22.774983] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:07.913 [2024-12-14 02:50:22.961058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.913 [2024-12-14 02:50:22.978369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:08.173 [2024-12-14 02:50:23.044800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.173 [2024-12-14 02:50:23.062010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:08.173 [2024-12-14 02:50:23.149092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.173 [2024-12-14 02:50:23.170572] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:08.173 [2024-12-14 02:50:23.206779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.173 [2024-12-14 02:50:23.222510] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:09:08.432 Running I/O for 1 seconds... 00:09:08.432 Running I/O for 1 seconds... 00:09:08.432 Running I/O for 1 seconds... 00:09:08.432 Running I/O for 1 seconds... 00:09:09.370 12272.00 IOPS, 47.94 MiB/s 00:09:09.370 Latency(us) 00:09:09.370 [2024-12-14T01:50:24.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.371 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:09.371 Nvme1n1 : 1.01 12330.43 48.17 0.00 0.00 10346.67 5180.46 15603.81 00:09:09.371 [2024-12-14T01:50:24.504Z] =================================================================================================================== 00:09:09.371 [2024-12-14T01:50:24.504Z] Total : 12330.43 48.17 0.00 0.00 10346.67 5180.46 15603.81 00:09:09.371 9414.00 IOPS, 36.77 MiB/s 00:09:09.371 Latency(us) 00:09:09.371 [2024-12-14T01:50:24.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.371 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:09.371 Nvme1n1 : 1.01 9468.66 36.99 0.00 0.00 13461.32 6896.88 22094.99 00:09:09.371 [2024-12-14T01:50:24.504Z] =================================================================================================================== 00:09:09.371 [2024-12-14T01:50:24.504Z] Total : 9468.66 36.99 0.00 0.00 13461.32 6896.88 22094.99 00:09:09.371 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 161303 00:09:09.371 12416.00 IOPS, 48.50 MiB/s [2024-12-14T01:50:24.504Z] 242624.00 IOPS, 947.75 MiB/s 00:09:09.371 Latency(us) 00:09:09.371 [2024-12-14T01:50:24.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.371 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:09.371 Nvme1n1 : 1.00 242261.82 946.34 0.00 0.00 525.38 220.40 1482.36 00:09:09.371 [2024-12-14T01:50:24.504Z] =================================================================================================================== 00:09:09.371 [2024-12-14T01:50:24.504Z] Total : 242261.82 946.34 0.00 0.00 525.38 220.40 1482.36 00:09:09.371 00:09:09.371 Latency(us) 00:09:09.371 [2024-12-14T01:50:24.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.371 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:09.371 Nvme1n1 : 1.00 12498.96 48.82 0.00 0.00 10216.74 3167.57 
24092.28 00:09:09.371 [2024-12-14T01:50:24.504Z] =================================================================================================================== 00:09:09.371 [2024-12-14T01:50:24.504Z] Total : 12498.96 48.82 0.00 0.00 10216.74 3167.57 24092.28 00:09:09.371 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 161306 00:09:09.630 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 161310 00:09:09.630 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.630 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.630 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.630 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.631 rmmod nvme_tcp 00:09:09.631 rmmod nvme_fabrics 00:09:09.631 rmmod nvme_keyring 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 161209 ']' 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 161209 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 161209 ']' 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 161209 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 161209 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 161209' 00:09:09.631 killing process 
with pid 161209 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 161209 00:09:09.631 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 161209 00:09:09.890 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.891 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:09.891 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.891 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:09.891 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:09.891 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.891 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.891 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.891 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:09.891 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.891 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.891 02:50:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.427 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:12.427 00:09:12.427 real 0m10.893s 00:09:12.427 user 0m16.066s 00:09:12.427 sys 0m6.268s 00:09:12.427 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.427 02:50:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.427 ************************************ 00:09:12.427 END TEST nvmf_bdev_io_wait 00:09:12.427 ************************************ 00:09:12.427 02:50:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:12.427 02:50:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:12.427 02:50:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.427 02:50:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.427 ************************************ 00:09:12.427 START TEST nvmf_queue_depth 00:09:12.427 ************************************ 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:12.427 * Looking for test storage... 
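One quick cross-check on the bdevperf tables above: the MiB/s column is just IOPS multiplied by the 4096-byte IO size, so the write job's 12330.43 IOPS corresponds to 12330.43 * 4096 / 2^20 ≈ 48.17 MiB/s, exactly the value reported. A tiny helper for that conversion (plain awk, nothing SPDK-specific; the function name is made up for illustration):

# iops_to_mibs <iops> <io_size_bytes>  ->  MiB/s
iops_to_mibs() {
    awk -v iops="$1" -v sz="$2" 'BEGIN { printf "%.2f\n", iops * sz / (1024 * 1024) }'
}
iops_to_mibs 12330.43 4096    # write job  -> 48.17
iops_to_mibs 242261.82 4096   # flush job  -> 946.34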
00:09:12.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:12.427 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:12.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.428 --rc genhtml_branch_coverage=1 00:09:12.428 --rc genhtml_function_coverage=1 00:09:12.428 --rc genhtml_legend=1 00:09:12.428 --rc geninfo_all_blocks=1 00:09:12.428 --rc geninfo_unexecuted_blocks=1 00:09:12.428 00:09:12.428 ' 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:12.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.428 --rc genhtml_branch_coverage=1 00:09:12.428 --rc genhtml_function_coverage=1 00:09:12.428 --rc genhtml_legend=1 00:09:12.428 --rc geninfo_all_blocks=1 00:09:12.428 --rc geninfo_unexecuted_blocks=1 00:09:12.428 00:09:12.428 ' 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:12.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.428 --rc genhtml_branch_coverage=1 00:09:12.428 --rc genhtml_function_coverage=1 00:09:12.428 --rc genhtml_legend=1 00:09:12.428 --rc geninfo_all_blocks=1 00:09:12.428 --rc geninfo_unexecuted_blocks=1 00:09:12.428 00:09:12.428 ' 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:12.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.428 --rc genhtml_branch_coverage=1 00:09:12.428 --rc genhtml_function_coverage=1 00:09:12.428 --rc genhtml_legend=1 00:09:12.428 --rc geninfo_all_blocks=1 00:09:12.428 --rc geninfo_unexecuted_blocks=1 00:09:12.428 00:09:12.428 ' 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:12.428 02:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:19.007 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:19.007 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:19.007 Found net devices under 0000:af:00.0: cvl_0_0 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:19.007 Found net devices under 0000:af:00.1: cvl_0_1 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:19.007 02:50:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:19.007 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:19.007 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:19.007 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:19.007 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:19.007 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:19.007 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:19.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:19.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:09:19.007 00:09:19.007 --- 10.0.0.2 ping statistics --- 00:09:19.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.007 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:09:19.007 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:19.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:19.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:09:19.007 00:09:19.007 --- 10.0.0.1 ping statistics --- 00:09:19.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.008 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=165174 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 165174 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 165174 ']' 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.008 [2024-12-14 02:50:33.268871] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
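The nvmftestinit phase above splits the two ice-driven ports detected earlier into a point-to-point test network: cvl_0_0 is moved into a private namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, an iptables rule admits TCP port 4420, and one ping in each direction confirms the path before the target app starts. A condensed sketch of that sequence, using the interface names and addresses from the trace:

# Target NIC lives in its own network namespace; initiator NIC stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic in, then prove reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1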
00:09:19.008 [2024-12-14 02:50:33.268919] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.008 [2024-12-14 02:50:33.348531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.008 [2024-12-14 02:50:33.369724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.008 [2024-12-14 02:50:33.369759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.008 [2024-12-14 02:50:33.369766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.008 [2024-12-14 02:50:33.369773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.008 [2024-12-14 02:50:33.369778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.008 [2024-12-14 02:50:33.370246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.008 [2024-12-14 02:50:33.508730] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.008 Malloc0 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.008 02:50:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.008 [2024-12-14 02:50:33.562982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=165203 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 165203 /var/tmp/bdevperf.sock 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 165203 ']' 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:19.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.008 [2024-12-14 02:50:33.615213] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
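Because the queue-depth bdevperf is launched idle (-z) with its own RPC socket, the workload it was configured for (-q 1024 -o 4096 -w verify -t 10) does not start until it is driven externally, which is what the next part of the trace does: an NVMe/TCP controller is attached to the exported subsystem through /var/tmp/bdevperf.sock and the run is kicked off with the bdevperf.py helper. Roughly, with the paths taken from the log (the test uses its rpc_cmd wrapper rather than calling rpc.py directly):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock
# Attach the target's namespace as bdev NVMe0n1 over TCP, then run the configured verify workload.
$SPDK/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests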
00:09:19.008 [2024-12-14 02:50:33.615254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165203 ] 00:09:19.008 [2024-12-14 02:50:33.690444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.008 [2024-12-14 02:50:33.713495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.008 NVMe0n1 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.008 02:50:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:19.008 Running I/O for 10 seconds... 00:09:20.882 12246.00 IOPS, 47.84 MiB/s [2024-12-14T01:50:37.393Z] 12295.00 IOPS, 48.03 MiB/s [2024-12-14T01:50:38.330Z] 12424.00 IOPS, 48.53 MiB/s [2024-12-14T01:50:39.268Z] 12495.00 IOPS, 48.81 MiB/s [2024-12-14T01:50:40.206Z] 12486.60 IOPS, 48.78 MiB/s [2024-12-14T01:50:41.144Z] 12595.67 IOPS, 49.20 MiB/s [2024-12-14T01:50:42.082Z] 12567.71 IOPS, 49.09 MiB/s [2024-12-14T01:50:43.461Z] 12634.62 IOPS, 49.35 MiB/s [2024-12-14T01:50:44.029Z] 12634.89 IOPS, 49.36 MiB/s [2024-12-14T01:50:44.289Z] 12661.80 IOPS, 49.46 MiB/s 00:09:29.156 Latency(us) 00:09:29.156 [2024-12-14T01:50:44.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.156 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:29.156 Verification LBA range: start 0x0 length 0x4000 00:09:29.156 NVMe0n1 : 10.07 12679.59 49.53 0.00 0.00 80508.20 18599.74 56423.38 00:09:29.156 [2024-12-14T01:50:44.289Z] =================================================================================================================== 00:09:29.156 [2024-12-14T01:50:44.289Z] Total : 12679.59 49.53 0.00 0.00 80508.20 18599.74 56423.38 00:09:29.156 { 00:09:29.156 "results": [ 00:09:29.156 { 00:09:29.156 "job": "NVMe0n1", 00:09:29.156 "core_mask": "0x1", 00:09:29.156 "workload": "verify", 00:09:29.156 "status": "finished", 00:09:29.156 "verify_range": { 00:09:29.156 "start": 0, 00:09:29.156 "length": 16384 00:09:29.156 }, 00:09:29.156 "queue_depth": 1024, 00:09:29.156 "io_size": 4096, 00:09:29.156 "runtime": 10.066732, 00:09:29.156 "iops": 12679.586582815555, 00:09:29.156 "mibps": 49.52963508912326, 00:09:29.156 "io_failed": 0, 00:09:29.156 "io_timeout": 0, 00:09:29.156 "avg_latency_us": 80508.19814000616, 00:09:29.156 "min_latency_us": 18599.74095238095, 00:09:29.156 "max_latency_us": 56423.375238095236 00:09:29.156 } 00:09:29.156 ], 00:09:29.156 "core_count": 1 00:09:29.156 } 00:09:29.156 02:50:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 165203 00:09:29.156 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 165203 ']' 00:09:29.156 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 165203 00:09:29.156 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:29.156 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.156 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165203 00:09:29.156 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.156 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.156 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165203' 00:09:29.156 killing process with pid 165203 00:09:29.156 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 165203 00:09:29.156 Received shutdown signal, test time was about 10.000000 seconds 00:09:29.156 00:09:29.156 Latency(us) 00:09:29.156 [2024-12-14T01:50:44.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.156 [2024-12-14T01:50:44.289Z] =================================================================================================================== 00:09:29.156 [2024-12-14T01:50:44.289Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:29.156 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 165203 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:29.416 rmmod nvme_tcp 00:09:29.416 rmmod nvme_fabrics 00:09:29.416 rmmod nvme_keyring 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 165174 ']' 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 165174 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 165174 ']' 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 165174 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165174 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165174' 00:09:29.416 killing process with pid 165174 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 165174 00:09:29.416 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 165174 00:09:29.676 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:29.676 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:29.676 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:29.676 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:29.676 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:29.676 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:29.676 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:29.676 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:29.676 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:29.676 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.676 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.676 02:50:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.581 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:31.581 00:09:31.581 real 0m19.656s 00:09:31.581 user 0m23.040s 00:09:31.581 sys 0m5.920s 00:09:31.581 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.581 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.581 ************************************ 00:09:31.581 END TEST nvmf_queue_depth 00:09:31.581 ************************************ 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.841 ************************************ 00:09:31.841 START TEST nvmf_target_multipath 00:09:31.841 ************************************ 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:31.841 * Looking for test storage... 00:09:31.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:31.841 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:31.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.842 --rc genhtml_branch_coverage=1 00:09:31.842 --rc genhtml_function_coverage=1 00:09:31.842 --rc genhtml_legend=1 00:09:31.842 --rc geninfo_all_blocks=1 00:09:31.842 --rc geninfo_unexecuted_blocks=1 00:09:31.842 00:09:31.842 ' 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:31.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.842 --rc genhtml_branch_coverage=1 00:09:31.842 --rc genhtml_function_coverage=1 00:09:31.842 --rc genhtml_legend=1 00:09:31.842 --rc geninfo_all_blocks=1 00:09:31.842 --rc geninfo_unexecuted_blocks=1 00:09:31.842 00:09:31.842 ' 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:31.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.842 --rc genhtml_branch_coverage=1 00:09:31.842 --rc genhtml_function_coverage=1 00:09:31.842 --rc genhtml_legend=1 00:09:31.842 --rc geninfo_all_blocks=1 00:09:31.842 --rc geninfo_unexecuted_blocks=1 00:09:31.842 00:09:31.842 ' 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:31.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.842 --rc genhtml_branch_coverage=1 00:09:31.842 --rc genhtml_function_coverage=1 00:09:31.842 --rc genhtml_legend=1 00:09:31.842 --rc geninfo_all_blocks=1 00:09:31.842 --rc geninfo_unexecuted_blocks=1 00:09:31.842 00:09:31.842 ' 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:31.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:31.842 02:50:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.417 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:38.418 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:38.418 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:38.418 Found net devices under 0000:af:00.0: cvl_0_0 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.418 02:50:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:38.418 Found net devices under 0000:af:00.1: cvl_0_1 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:38.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:09:38.418 00:09:38.418 --- 10.0.0.2 ping statistics --- 00:09:38.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.418 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:09:38.418 00:09:38.418 --- 10.0.0.1 ping statistics --- 00:09:38.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.418 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:38.418 only one NIC for nvmf test 00:09:38.418 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:38.419 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:38.419 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:38.419 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:38.419 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
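The multipath prologue traced above wires the target NIC into a private network namespace before any NVMe/TCP traffic flows: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2, the initiator keeps cvl_0_1 with 10.0.0.1 in the root namespace, port 4420 is opened with a comment-tagged iptables rule, and connectivity is verified with ping in both directions. Below is a condensed sketch of those steps; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are specific to this test bed.

  # Sketch of the namespace plumbing traced above (interface names are from this machine).
  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP port; the comment tag lets cleanup strip exactly these rules later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  ping -c 1 10.0.0.2                                 # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator

With only a single physical NIC pair available, multipath.sh then prints "only one NIC for nvmf test" and exits 0, which is why the test is torn down immediately after this point in the trace.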
00:09:38.419 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:38.419 02:50:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:38.419 rmmod nvme_tcp 00:09:38.419 rmmod nvme_fabrics 00:09:38.419 rmmod nvme_keyring 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.419 02:50:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:40.326 00:09:40.326 real 0m8.388s 00:09:40.326 user 0m1.855s 00:09:40.326 sys 0m4.530s 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:40.326 ************************************ 00:09:40.326 END TEST nvmf_target_multipath 00:09:40.326 ************************************ 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.326 ************************************ 00:09:40.326 START TEST nvmf_zcopy 00:09:40.326 ************************************ 00:09:40.326 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:40.326 * Looking for test storage... 
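Each of these tests ends with the same nvmftestfini teardown traced above: unload the NVMe initiator modules, restore iptables while dropping the SPDK_NVMF-tagged rules, remove the target namespace, and flush the initiator test address. A rough equivalent is sketched below; _remove_spdk_ns is not expanded in the trace, so the ip netns delete line is an assumption about what it does rather than a command shown here.

  # Sketch of the nvmftestfini cleanup traced above.
  modprobe -v -r nvme-tcp        # the rmmod lines in the trace show this also drops nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics

  # Drop only the firewall rules tagged SPDK_NVMF, keep everything else.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Assumption: _remove_spdk_ns deletes the namespace created during init (not expanded in the trace).
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1       # drop the initiator-side test address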
00:09:40.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.327 --rc genhtml_branch_coverage=1 00:09:40.327 --rc genhtml_function_coverage=1 00:09:40.327 --rc genhtml_legend=1 00:09:40.327 --rc geninfo_all_blocks=1 00:09:40.327 --rc geninfo_unexecuted_blocks=1 00:09:40.327 00:09:40.327 ' 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.327 --rc genhtml_branch_coverage=1 00:09:40.327 --rc genhtml_function_coverage=1 00:09:40.327 --rc genhtml_legend=1 00:09:40.327 --rc geninfo_all_blocks=1 00:09:40.327 --rc geninfo_unexecuted_blocks=1 00:09:40.327 00:09:40.327 ' 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.327 --rc genhtml_branch_coverage=1 00:09:40.327 --rc genhtml_function_coverage=1 00:09:40.327 --rc genhtml_legend=1 00:09:40.327 --rc geninfo_all_blocks=1 00:09:40.327 --rc geninfo_unexecuted_blocks=1 00:09:40.327 00:09:40.327 ' 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.327 --rc genhtml_branch_coverage=1 00:09:40.327 --rc genhtml_function_coverage=1 00:09:40.327 --rc genhtml_legend=1 00:09:40.327 --rc geninfo_all_blocks=1 00:09:40.327 --rc geninfo_unexecuted_blocks=1 00:09:40.327 00:09:40.327 ' 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:40.327 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:40.328 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:40.328 02:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:46.903 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:46.903 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.903 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:46.903 Found net devices under 0000:af:00.0: cvl_0_0 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:46.904 Found net devices under 0000:af:00.1: cvl_0_1 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:46.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:09:46.904 00:09:46.904 --- 10.0.0.2 ping statistics --- 00:09:46.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.904 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:46.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:09:46.904 00:09:46.904 --- 10.0.0.1 ping statistics --- 00:09:46.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.904 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=174190 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 174190 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 174190 ']' 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.904 [2024-12-14 02:51:01.479565] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
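[editor's sketch] The nvmftestinit trace above moves one of the two detected E810 ports (cvl_0_0) into a private network namespace and leaves its peer (cvl_0_1) on the host, so target and initiator traffic crosses real NICs between 10.0.0.2 and 10.0.0.1. Below is a minimal sketch of the equivalent manual steps; the interface names, addresses, namespace name and iptables comment are taken from the trace itself, everything else is ordinary iproute2/iptables/ping usage, not the harness's own code.

    # Move the target-side port into its own namespace; keep the initiator side on the host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends of the link (initiator 10.0.0.1, target 10.0.0.2).
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring the interfaces (and the namespace loopback) up.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic to port 4420 on the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check reachability in both directions.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The 0.448 ms and 0.118 ms ping replies recorded above confirm that path before nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace.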
00:09:46.904 [2024-12-14 02:51:01.479609] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.904 [2024-12-14 02:51:01.558117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.904 [2024-12-14 02:51:01.579045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.904 [2024-12-14 02:51:01.579079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.904 [2024-12-14 02:51:01.579086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.904 [2024-12-14 02:51:01.579092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.904 [2024-12-14 02:51:01.579097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.904 [2024-12-14 02:51:01.579587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.904 [2024-12-14 02:51:01.721591] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.904 [2024-12-14 02:51:01.745785] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.904 malloc0 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:46.904 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.905 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.905 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.905 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:46.905 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:46.905 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:46.905 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:46.905 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:46.905 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:46.905 { 00:09:46.905 "params": { 00:09:46.905 "name": "Nvme$subsystem", 00:09:46.905 "trtype": "$TEST_TRANSPORT", 00:09:46.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.905 "adrfam": "ipv4", 00:09:46.905 "trsvcid": "$NVMF_PORT", 00:09:46.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.905 "hdgst": ${hdgst:-false}, 00:09:46.905 "ddgst": ${ddgst:-false} 00:09:46.905 }, 00:09:46.905 "method": "bdev_nvme_attach_controller" 00:09:46.905 } 00:09:46.905 EOF 00:09:46.905 )") 00:09:46.905 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:46.905 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:46.905 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:46.905 02:51:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:46.905 "params": { 00:09:46.905 "name": "Nvme1", 00:09:46.905 "trtype": "tcp", 00:09:46.905 "traddr": "10.0.0.2", 00:09:46.905 "adrfam": "ipv4", 00:09:46.905 "trsvcid": "4420", 00:09:46.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.905 "hdgst": false, 00:09:46.905 "ddgst": false 00:09:46.905 }, 00:09:46.905 "method": "bdev_nvme_attach_controller" 00:09:46.905 }' 00:09:46.905 [2024-12-14 02:51:01.828602] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:46.905 [2024-12-14 02:51:01.828654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174276 ] 00:09:46.905 [2024-12-14 02:51:01.903146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.905 [2024-12-14 02:51:01.925451] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.164 Running I/O for 10 seconds... 00:09:49.480 8805.00 IOPS, 68.79 MiB/s [2024-12-14T01:51:05.186Z] 8872.50 IOPS, 69.32 MiB/s [2024-12-14T01:51:06.567Z] 8888.67 IOPS, 69.44 MiB/s [2024-12-14T01:51:07.506Z] 8900.00 IOPS, 69.53 MiB/s [2024-12-14T01:51:08.444Z] 8906.00 IOPS, 69.58 MiB/s [2024-12-14T01:51:09.382Z] 8908.00 IOPS, 69.59 MiB/s [2024-12-14T01:51:10.316Z] 8917.57 IOPS, 69.67 MiB/s [2024-12-14T01:51:11.253Z] 8912.62 IOPS, 69.63 MiB/s [2024-12-14T01:51:12.630Z] 8900.33 IOPS, 69.53 MiB/s [2024-12-14T01:51:12.630Z] 8903.00 IOPS, 69.55 MiB/s 00:09:57.497 Latency(us) 00:09:57.497 [2024-12-14T01:51:12.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.497 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:57.497 Verification LBA range: start 0x0 length 0x1000 00:09:57.497 Nvme1n1 : 10.01 8907.10 69.59 0.00 0.00 14329.31 2340.57 21595.67 00:09:57.497 [2024-12-14T01:51:12.630Z] =================================================================================================================== 00:09:57.497 [2024-12-14T01:51:12.630Z] Total : 8907.10 69.59 0.00 0.00 14329.31 2340.57 21595.67 00:09:57.497 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=176456 00:09:57.497 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:57.497 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:57.497 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:57.497 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:57.497 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:57.497 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:57.497 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:57.497 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:57.497 { 00:09:57.497 "params": { 00:09:57.498 "name": 
"Nvme$subsystem", 00:09:57.498 "trtype": "$TEST_TRANSPORT", 00:09:57.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.498 "adrfam": "ipv4", 00:09:57.498 "trsvcid": "$NVMF_PORT", 00:09:57.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.498 "hdgst": ${hdgst:-false}, 00:09:57.498 "ddgst": ${ddgst:-false} 00:09:57.498 }, 00:09:57.498 "method": "bdev_nvme_attach_controller" 00:09:57.498 } 00:09:57.498 EOF 00:09:57.498 )") 00:09:57.498 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:57.498 [2024-12-14 02:51:12.359182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.359213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:57.498 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:57.498 02:51:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:57.498 "params": { 00:09:57.498 "name": "Nvme1", 00:09:57.498 "trtype": "tcp", 00:09:57.498 "traddr": "10.0.0.2", 00:09:57.498 "adrfam": "ipv4", 00:09:57.498 "trsvcid": "4420", 00:09:57.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.498 "hdgst": false, 00:09:57.498 "ddgst": false 00:09:57.498 }, 00:09:57.498 "method": "bdev_nvme_attach_controller" 00:09:57.498 }' 00:09:57.498 [2024-12-14 02:51:12.371186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.371198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.383216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.383227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.395247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.395257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.399603] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:57.498 [2024-12-14 02:51:12.399642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176456 ] 00:09:57.498 [2024-12-14 02:51:12.407277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.407287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.419310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.419327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.431345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.431354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.443373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.443383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.455404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.455416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.467436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.467445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.473847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.498 [2024-12-14 02:51:12.479471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.479482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.491504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.491517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.496118] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.498 [2024-12-14 02:51:12.503535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.503546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.515580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.515603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.527603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.527618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.539633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.539647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.551667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:57.498 [2024-12-14 02:51:12.551679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.563697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.563711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.575727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.575736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.587773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.587793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.599796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.599809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.611824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.611837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.498 [2024-12-14 02:51:12.623854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.498 [2024-12-14 02:51:12.623864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.635901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.635912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.647917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.647930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.659951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.659965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.671982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.671996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.684015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.684024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.696046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.696056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.708081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.708094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.720112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.720122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 
02:51:12.732143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.732152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.744179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.744189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.756212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.756223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.768243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.768252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.780274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.780284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.792308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.792322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.839579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.839596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.848463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.848475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 Running I/O for 5 seconds... 
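[editor's sketch] Before the 5-second run that starts here, the zcopy.sh trace above provisions the target over RPC: a TCP transport created with zero-copy enabled (nvmf_create_transport -t tcp -o -c 0 --zcopy), subsystem nqn.2016-06.io.spdk:cnode1, data and discovery listeners on 10.0.0.2:4420, a malloc bdev (32 MB, 4096-byte blocks), and namespace 1. A condensed sketch of that sequence follows; the arguments are copied from the trace, while the rpc() helper is a stand-in assumption for the harness's rpc_cmd wrapper (assumed here to resolve to scripts/rpc.py against the /var/tmp/spdk.sock socket shown above; the wrapper itself is not part of this excerpt).

    # Stand-in for the harness's rpc_cmd (assumption: scripts/rpc.py, default RPC socket).
    rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    # Provision the zero-copy TCP target, exactly as traced above.
    rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_malloc_create 32 4096 -b malloc0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf is then pointed at this target via --json /dev/fd/62, where gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters printed earlier in the trace (Nvme1, trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1).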
00:09:57.758 [2024-12-14 02:51:12.864604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.864623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.758 [2024-12-14 02:51:12.878173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.758 [2024-12-14 02:51:12.878192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:12.892064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:12.892082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:12.905826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:12.905844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:12.919626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:12.919644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:12.933609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:12.933627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:12.946948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:12.946970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:12.960670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:12.960688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:12.974488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:12.974506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:12.988022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:12.988040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:13.001980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:13.001999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:13.015732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:13.015749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:13.029408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:13.029426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:13.043591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:13.043609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:13.057389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 
[2024-12-14 02:51:13.057406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:13.071494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:13.071512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:13.081937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:13.081955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:13.095926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:13.095944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:13.109781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:13.109799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:13.123781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:13.123798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.018 [2024-12-14 02:51:13.137183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.018 [2024-12-14 02:51:13.137200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.151009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.151026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.164760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.164784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.178673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.178693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.192667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.192686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.204010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.204031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.217797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.217815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.231507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.231524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.245123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.245141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.258897] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.258915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.272368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.272385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.286027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.286044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.299856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.299873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.313298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.313321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.326847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.326865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.340763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.340780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.354504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.354521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.368299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.368324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.381859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.381878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.395815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.395834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.278 [2024-12-14 02:51:13.409274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.278 [2024-12-14 02:51:13.409293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.423042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.423060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.436852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.436870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.450621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.450641] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.463908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.463928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.477652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.477669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.491396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.491414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.505091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.505110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.519027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.519046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.532666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.532685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.546374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.546391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.560217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.560235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.574014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.574034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.587460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.587480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.601103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.601122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.614597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.614615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.628262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.628281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.642346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.642364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.655755] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.655775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.538 [2024-12-14 02:51:13.669331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.538 [2024-12-14 02:51:13.669349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-12-14 02:51:13.682686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-12-14 02:51:13.682704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-12-14 02:51:13.696377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-12-14 02:51:13.696395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-12-14 02:51:13.710159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-12-14 02:51:13.710178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-12-14 02:51:13.724212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-12-14 02:51:13.724231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-12-14 02:51:13.738106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-12-14 02:51:13.738125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-12-14 02:51:13.751854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-12-14 02:51:13.751871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-12-14 02:51:13.765633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-12-14 02:51:13.765655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-12-14 02:51:13.779407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-12-14 02:51:13.779424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-12-14 02:51:13.793261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-12-14 02:51:13.793279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.798 [2024-12-14 02:51:13.807176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.798 [2024-12-14 02:51:13.807196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-12-14 02:51:13.821200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-12-14 02:51:13.821218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-12-14 02:51:13.834596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-12-14 02:51:13.834613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-12-14 02:51:13.848425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-12-14 02:51:13.848442] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 17022.00 IOPS, 132.98 MiB/s [2024-12-14T01:51:13.932Z] [2024-12-14 02:51:13.861866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-12-14 02:51:13.861883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-12-14 02:51:13.875495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-12-14 02:51:13.875512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-12-14 02:51:13.889182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-12-14 02:51:13.889200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-12-14 02:51:13.903216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-12-14 02:51:13.903233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.799 [2024-12-14 02:51:13.917153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.799 [2024-12-14 02:51:13.917170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:13.930971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:13.930989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:13.944589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:13.944607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:13.958216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:13.958234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:13.971915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:13.971936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:13.985810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:13.985828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:13.999646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:13.999664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:14.013486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:14.013504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:14.027411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:14.027429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:14.041196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:14.041213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 
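[editor's sketch] The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs in this phase is expected output, not a test failure: while the 5-second randrw bdevperf job runs, the test keeps issuing nvmf_subsystem_add_ns for NSID 1, and, judging by the nvmf_rpc_ns_paused callback named in each message, every request pauses the subsystem, is rejected against the still-attached namespace, and resumes it, all while zero-copy I/O stays in flight. The actual loop lives in test/nvmf/target/zcopy.sh and is not shown in this excerpt; the following is only a hypothetical reconstruction of the pattern, reusing the rpc() stand-in from the sketch above and the perfpid variable (176456 here) that the trace records at zcopy.sh line 39 for the background bdevperf process.

    # Hypothetical reconstruction (assumption: not the harness's literal code):
    # keep re-adding an already-attached namespace while bdevperf is still running,
    # so each RPC exercises the subsystem pause/resume path and fails with
    # "Requested NSID 1 already in use", as seen in the log above.
    while kill -0 "$perfpid" 2>/dev/null; do
      rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done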
02:51:14.055051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:14.055068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:14.068543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:14.068560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:14.082120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:14.082137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:14.095983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:14.096002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:14.110351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:14.110368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:14.121457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:14.121474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:14.135664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:14.135681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:14.149598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:14.149616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:14.163526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:14.163544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.059 [2024-12-14 02:51:14.177487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.059 [2024-12-14 02:51:14.177507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.191441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.191460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.205104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.205124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.218636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.218654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.232232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.232254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.246275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.246294] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.259894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.259911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.273908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.273925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.287277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.287295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.301272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.301291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.314668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.314686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.328350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.328368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.341842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.341859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.355744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.355762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.369524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.369542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.382958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.382975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.396320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.396338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.409803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.409821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.423336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.423353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.437063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.437080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.319 [2024-12-14 02:51:14.450764] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.319 [2024-12-14 02:51:14.450782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.464772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.464790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.478293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.478317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.491536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.491557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.505202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.505220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.518323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.518342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.532270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.532287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.545636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.545654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.559427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.559445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.573462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.573480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.586658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.586676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.600142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.600160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.613929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.613947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.627455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.627472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.641263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.641280] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.655334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.655352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.669023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.669040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.682813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.682831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.696600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.696618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.579 [2024-12-14 02:51:14.710118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.579 [2024-12-14 02:51:14.710137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.724119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.724137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.737488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.737506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.751164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.751187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.764910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.764930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.778977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.778996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.792962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.792980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.806280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.806298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.820168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.820188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.833870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.833890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.847508] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.847526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 17054.00 IOPS, 133.23 MiB/s [2024-12-14T01:51:14.972Z] [2024-12-14 02:51:14.861349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.861367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.875026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.875045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.888464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.888482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.902125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.902143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.915883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.915902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.929457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.929476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.943121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.943140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.956453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.956471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.839 [2024-12-14 02:51:14.970070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.839 [2024-12-14 02:51:14.970089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.098 [2024-12-14 02:51:14.983656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.098 [2024-12-14 02:51:14.983675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:14.997367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:14.997386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.011139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.011157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.025105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.025124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.039061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:00.099 [2024-12-14 02:51:15.039080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.052748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.052766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.066428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.066446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.079639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.079657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.093253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.093272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.107159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.107177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.120796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.120814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.134705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.134724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.148355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.148372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.161913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.161931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.175346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.175364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.189519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.189538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.199885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.199903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.214373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.214391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.099 [2024-12-14 02:51:15.228208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.099 [2024-12-14 02:51:15.228228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.241845] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.241864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.255528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.255546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.268576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.268594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.282555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.282583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.296080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.296097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.309899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.309916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.323377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.323394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.337098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.337116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.350833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.350850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.364980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.364998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.378559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.378586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.392364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.392382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.406198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.406216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.420477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.420495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.435061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.435078] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.448665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.448683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.462152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.462170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.475672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.475690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.358 [2024-12-14 02:51:15.489067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.358 [2024-12-14 02:51:15.489086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.617 [2024-12-14 02:51:15.502927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.617 [2024-12-14 02:51:15.502944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.617 [2024-12-14 02:51:15.516968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.617 [2024-12-14 02:51:15.516985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.617 [2024-12-14 02:51:15.530436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.617 [2024-12-14 02:51:15.530454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.617 [2024-12-14 02:51:15.543849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.617 [2024-12-14 02:51:15.543866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.617 [2024-12-14 02:51:15.557610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.617 [2024-12-14 02:51:15.557627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.617 [2024-12-14 02:51:15.571386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.617 [2024-12-14 02:51:15.571404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.617 [2024-12-14 02:51:15.585057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.617 [2024-12-14 02:51:15.585075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.617 [2024-12-14 02:51:15.598814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.617 [2024-12-14 02:51:15.598833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.617 [2024-12-14 02:51:15.612294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.617 [2024-12-14 02:51:15.612318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.617 [2024-12-14 02:51:15.626104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.618 [2024-12-14 02:51:15.626121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.618 [2024-12-14 02:51:15.639818] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.618 [2024-12-14 02:51:15.639836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.618 [2024-12-14 02:51:15.653068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.618 [2024-12-14 02:51:15.653085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.618 [2024-12-14 02:51:15.666630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.618 [2024-12-14 02:51:15.666647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.618 [2024-12-14 02:51:15.680446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.618 [2024-12-14 02:51:15.680464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.618 [2024-12-14 02:51:15.694592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.618 [2024-12-14 02:51:15.694614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.618 [2024-12-14 02:51:15.708450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.618 [2024-12-14 02:51:15.708468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.618 [2024-12-14 02:51:15.722384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.618 [2024-12-14 02:51:15.722402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.618 [2024-12-14 02:51:15.736171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.618 [2024-12-14 02:51:15.736189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.618 [2024-12-14 02:51:15.749629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.618 [2024-12-14 02:51:15.749647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.877 [2024-12-14 02:51:15.763246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.877 [2024-12-14 02:51:15.763263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.877 [2024-12-14 02:51:15.777159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.777182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.790791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.790809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.804376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.804394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.818039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.818060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.831246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.831263] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.845001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.845018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.859239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.859257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 17083.33 IOPS, 133.46 MiB/s [2024-12-14T01:51:16.011Z] [2024-12-14 02:51:15.873030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.873048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.886870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.886888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.900229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.900246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.913860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.913878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.927376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.927394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.940988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.941006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.954750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.954767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.968509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.968526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.981956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.981973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:15.995632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:15.995649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.878 [2024-12-14 02:51:16.009533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.878 [2024-12-14 02:51:16.009551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.023404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.023422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 
02:51:16.036708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.036730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.050759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.050779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.064577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.064594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.078153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.078172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.091847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.091865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.105406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.105425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.119244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.119264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.133135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.133155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.147089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.147107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.160651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.160669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.174415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.174433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.188236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.188254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.202255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.202273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.216339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.216358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.227359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.227377] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.241802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.241821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.255449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.255469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.138 [2024-12-14 02:51:16.269368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.138 [2024-12-14 02:51:16.269387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.397 [2024-12-14 02:51:16.282710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.397 [2024-12-14 02:51:16.282728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.397 [2024-12-14 02:51:16.296180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.296204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.310174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.310193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.324201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.324220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.338048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.338066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.351643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.351661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.365419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.365437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.379061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.379080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.392531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.392549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.406178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.406196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.419663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.419680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.433239] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.433256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.447123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.447142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.461035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.461052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.475123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.475140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.489203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.489222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.502904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.502922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.516540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.516558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.398 [2024-12-14 02:51:16.529837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.398 [2024-12-14 02:51:16.529855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.543819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.543837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.557646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.557663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.571508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.571525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.585075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.585093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.599142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.599160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.613477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.613495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.627380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.627398] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.641070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.641088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.654653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.654670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.668057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.668075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.681884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.681903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.695859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.695877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.709368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.709386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.723173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.723191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.737062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.737080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.750727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.750745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.764053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.764070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.658 [2024-12-14 02:51:16.777948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.658 [2024-12-14 02:51:16.777965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.791715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.791733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.805630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.805648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.819161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.819179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.832865] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.832882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.846615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.846633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.860446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.860463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 17093.00 IOPS, 133.54 MiB/s [2024-12-14T01:51:17.051Z] [2024-12-14 02:51:16.874032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.874050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.887855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.887874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.901425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.901442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.914926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.914943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.928682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.928699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.943007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.943026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.956528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.956546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.970159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.970177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.983872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.983889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:16.997658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:16.997676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:17.011665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:17.011683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:17.025286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:01.918 [2024-12-14 02:51:17.025304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.918 [2024-12-14 02:51:17.039333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.918 [2024-12-14 02:51:17.039351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [2024-12-14 02:51:17.053017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.178 [2024-12-14 02:51:17.053035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [2024-12-14 02:51:17.066620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.178 [2024-12-14 02:51:17.066637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [2024-12-14 02:51:17.080304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.178 [2024-12-14 02:51:17.080327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [2024-12-14 02:51:17.093903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.178 [2024-12-14 02:51:17.093921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [2024-12-14 02:51:17.108008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.178 [2024-12-14 02:51:17.108025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [2024-12-14 02:51:17.121830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.178 [2024-12-14 02:51:17.121848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [2024-12-14 02:51:17.135831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.178 [2024-12-14 02:51:17.135848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [2024-12-14 02:51:17.149098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.178 [2024-12-14 02:51:17.149115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [2024-12-14 02:51:17.162540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.178 [2024-12-14 02:51:17.162558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [2024-12-14 02:51:17.176269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.178 [2024-12-14 02:51:17.176287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [2024-12-14 02:51:17.189831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.178 [2024-12-14 02:51:17.189848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [2024-12-14 02:51:17.203454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.178 [2024-12-14 02:51:17.203471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [2024-12-14 02:51:17.217013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.178 [2024-12-14 02:51:17.217030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [2024-12-14 02:51:17.230876] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.178 [2024-12-14 02:51:17.230893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.178 [... the same pair of messages (subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) recurs roughly every 13 ms from 02:51:17.244583 through 02:51:17.818482 as the test keeps issuing nvmf_subsystem_add_ns for NSID 1; some forty near-identical repetitions are condensed here ...] [2024-12-14 02:51:17.818500]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:17.832154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:17.832172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:17.845440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:17.845462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:17.859126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:17.859144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 17106.80 IOPS, 133.65 MiB/s [2024-12-14T01:51:18.091Z] [2024-12-14 02:51:17.871833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:17.871851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 00:10:02.958 Latency(us) 00:10:02.958 [2024-12-14T01:51:18.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.958 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:02.958 Nvme1n1 : 5.01 17109.89 133.67 0.00 0.00 7473.78 2980.33 14542.75 00:10:02.958 [2024-12-14T01:51:18.091Z] =================================================================================================================== 00:10:02.958 [2024-12-14T01:51:18.091Z] Total : 17109.89 133.67 0.00 0.00 7473.78 2980.33 14542.75 00:10:02.958 [2024-12-14 02:51:17.881042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:17.881057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:17.893075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:17.893090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:17.905114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:17.905133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:17.917143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:17.917159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:17.929175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:17.929191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:17.941203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:17.941216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:17.953238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:17.953252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:17.965266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 
02:51:17.965280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:17.977299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:17.977316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:17.989329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:17.989354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:18.001380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:18.001393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:18.013405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:18.013416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 [2024-12-14 02:51:18.025435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.958 [2024-12-14 02:51:18.025446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (176456) - No such process 00:10:02.958 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 176456 00:10:02.958 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.958 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.958 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.958 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.958 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:02.958 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.958 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.958 delay0 00:10:02.958 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.958 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:02.958 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.958 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.958 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.958 02:51:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:03.217 [2024-12-14 02:51:18.181246] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:09.786 Initializing NVMe Controllers 00:10:09.786 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:09.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:09.786 Initialization complete. Launching workers. 00:10:09.786 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 104 00:10:09.786 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 391, failed to submit 33 00:10:09.786 success 206, unsuccessful 185, failed 0 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.786 rmmod nvme_tcp 00:10:09.786 rmmod nvme_fabrics 00:10:09.786 rmmod nvme_keyring 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 174190 ']' 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 174190 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 174190 ']' 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 174190 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 174190 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 174190' 00:10:09.786 killing process with pid 174190 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 174190 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 174190 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:09.786 02:51:24 
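The zcopy run above stacks a delay bdev (delay0) on top of malloc0 with very large read/write latencies (1000000 for each of -r/-t/-w/-n) and publishes it as namespace 1 of nqn.2016-06.io.spdk:cnode1, so that the abort example has slow, in-flight I/O to cancel. A minimal sketch of that sequence, assuming a target is already running and configured with malloc0 and cnode1 exactly as earlier in this run, and using only parameters visible in the log (rpc_cmd in the test scripts corresponds to SPDK's scripts/rpc.py):

  # layer a delay bdev over the existing malloc0 and publish it as NSID 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue random read/write I/O against the slow namespace and abort it
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The mixed "success 206, unsuccessful 185, failed 0" counts in the summary above are the expected shape of the result, since not every in-flight command can still be aborted by the time the abort request arrives.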
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.786 02:51:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.692 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.692 00:10:11.692 real 0m31.375s 00:10:11.692 user 0m43.180s 00:10:11.692 sys 0m9.835s 00:10:11.692 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.692 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.692 ************************************ 00:10:11.692 END TEST nvmf_zcopy 00:10:11.692 ************************************ 00:10:11.692 02:51:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:11.692 02:51:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.692 02:51:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.692 02:51:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.692 ************************************ 00:10:11.692 START TEST nvmf_nmic 00:10:11.692 ************************************ 00:10:11.692 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:11.692 * Looking for test storage... 
00:10:11.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.692 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:11.692 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:11.692 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.952 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.953 --rc genhtml_branch_coverage=1 00:10:11.953 --rc genhtml_function_coverage=1 00:10:11.953 --rc genhtml_legend=1 00:10:11.953 --rc geninfo_all_blocks=1 00:10:11.953 --rc geninfo_unexecuted_blocks=1 00:10:11.953 00:10:11.953 ' 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.953 --rc genhtml_branch_coverage=1 00:10:11.953 --rc genhtml_function_coverage=1 00:10:11.953 --rc genhtml_legend=1 00:10:11.953 --rc geninfo_all_blocks=1 00:10:11.953 --rc geninfo_unexecuted_blocks=1 00:10:11.953 00:10:11.953 ' 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.953 --rc genhtml_branch_coverage=1 00:10:11.953 --rc genhtml_function_coverage=1 00:10:11.953 --rc genhtml_legend=1 00:10:11.953 --rc geninfo_all_blocks=1 00:10:11.953 --rc geninfo_unexecuted_blocks=1 00:10:11.953 00:10:11.953 ' 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.953 --rc genhtml_branch_coverage=1 00:10:11.953 --rc genhtml_function_coverage=1 00:10:11.953 --rc genhtml_legend=1 00:10:11.953 --rc geninfo_all_blocks=1 00:10:11.953 --rc geninfo_unexecuted_blocks=1 00:10:11.953 00:10:11.953 ' 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:11.953 
02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.953 02:51:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.535 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:18.536 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:18.536 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.536 02:51:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:18.536 Found net devices under 0000:af:00.0: cvl_0_0 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:18.536 Found net devices under 0000:af:00.1: cvl_0_1 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.536 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:10:18.537 00:10:18.537 --- 10.0.0.2 ping statistics --- 00:10:18.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.537 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:10:18.537 00:10:18.537 --- 10.0.0.1 ping statistics --- 00:10:18.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.537 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=181941 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 181941 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 181941 ']' 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.537 02:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.537 [2024-12-14 02:51:32.887495] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
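The nvmftestinit sequence above moves the target-side port (cvl_0_0) into its own network namespace, assigns both ports addresses on 10.0.0.0/24, opens TCP port 4420, and verifies both directions with ping. A sketch of that test bed built only from the commands already shown in the log; the interface names cvl_0_0/cvl_0_1 are specific to this machine:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator-side port stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator

The nvmf_tgt process is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt, as the nvmfappstart line below shows), which is why the listeners added over RPC bind to 10.0.0.2.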
00:10:18.537 [2024-12-14 02:51:32.887539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.537 [2024-12-14 02:51:32.969976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.537 [2024-12-14 02:51:32.993636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.537 [2024-12-14 02:51:32.993673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.537 [2024-12-14 02:51:32.993683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.537 [2024-12-14 02:51:32.993692] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.537 [2024-12-14 02:51:32.993698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.537 [2024-12-14 02:51:32.995159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.537 [2024-12-14 02:51:32.995268] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.537 [2024-12-14 02:51:32.995282] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.537 [2024-12-14 02:51:32.995291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.537 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.537 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:18.537 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.538 [2024-12-14 02:51:33.127442] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.538 Malloc0 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.538 [2024-12-14 02:51:33.189141] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:18.538 test case1: single bdev can't be used in multiple subsystems 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.538 [2024-12-14 02:51:33.217048] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:18.538 [2024-12-14 02:51:33.217070] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:18.538 [2024-12-14 02:51:33.217079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.538 request: 00:10:18.538 { 00:10:18.538 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:18.538 "namespace": { 00:10:18.538 "bdev_name": "Malloc0", 00:10:18.538 "no_auto_visible": false, 
00:10:18.538 "hide_metadata": false 00:10:18.538 }, 00:10:18.538 "method": "nvmf_subsystem_add_ns", 00:10:18.538 "req_id": 1 00:10:18.538 } 00:10:18.538 Got JSON-RPC error response 00:10:18.538 response: 00:10:18.538 { 00:10:18.538 "code": -32602, 00:10:18.538 "message": "Invalid parameters" 00:10:18.538 } 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:18.538 Adding namespace failed - expected result. 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:18.538 test case2: host connect to nvmf target in multiple paths 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.538 [2024-12-14 02:51:33.229185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:18.538 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.539 02:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:19.478 02:51:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:20.414 02:51:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:20.414 02:51:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:20.414 02:51:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:20.414 02:51:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:20.414 02:51:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:22.949 02:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:22.949 02:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:22.949 02:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:22.949 02:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:22.949 02:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:22.949 02:51:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:22.949 02:51:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:22.949 [global] 00:10:22.949 thread=1 00:10:22.949 invalidate=1 00:10:22.949 rw=write 00:10:22.949 time_based=1 00:10:22.949 runtime=1 00:10:22.949 ioengine=libaio 00:10:22.949 direct=1 00:10:22.949 bs=4096 00:10:22.949 iodepth=1 00:10:22.949 norandommap=0 00:10:22.949 numjobs=1 00:10:22.949 00:10:22.949 verify_dump=1 00:10:22.949 verify_backlog=512 00:10:22.949 verify_state_save=0 00:10:22.949 do_verify=1 00:10:22.949 verify=crc32c-intel 00:10:22.949 [job0] 00:10:22.949 filename=/dev/nvme0n1 00:10:22.949 Could not set queue depth (nvme0n1) 00:10:22.949 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.949 fio-3.35 00:10:22.949 Starting 1 thread 00:10:24.328 00:10:24.328 job0: (groupid=0, jobs=1): err= 0: pid=182809: Sat Dec 14 02:51:39 2024 00:10:24.328 read: IOPS=2548, BW=9.95MiB/s (10.4MB/s)(9.96MiB/1001msec) 00:10:24.328 slat (nsec): min=6741, max=33962, avg=7790.13, stdev=1195.79 00:10:24.328 clat (usec): min=169, max=309, avg=228.90, stdev=20.67 00:10:24.328 lat (usec): min=177, max=317, avg=236.69, stdev=20.72 00:10:24.328 clat percentiles (usec): 00:10:24.328 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 212], 00:10:24.328 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 227], 00:10:24.328 | 70.00th=[ 241], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 265], 00:10:24.328 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 285], 99.95th=[ 285], 00:10:24.328 | 99.99th=[ 310] 00:10:24.328 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:24.328 slat (nsec): min=9831, max=45743, avg=10936.27, stdev=1743.67 00:10:24.328 clat (usec): min=106, max=228, avg=137.79, stdev=18.03 00:10:24.328 lat (usec): min=121, max=263, avg=148.72, stdev=18.43 00:10:24.328 clat percentiles (usec): 00:10:24.328 | 1.00th=[ 117], 5.00th=[ 120], 10.00th=[ 121], 20.00th=[ 123], 00:10:24.328 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 135], 00:10:24.328 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 167], 00:10:24.328 | 99.00th=[ 176], 99.50th=[ 178], 99.90th=[ 182], 99.95th=[ 219], 00:10:24.328 | 99.99th=[ 229] 00:10:24.328 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:24.328 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:24.328 lat (usec) : 250=88.38%, 500=11.62% 00:10:24.328 cpu : usr=3.30%, sys=8.70%, ctx=5111, majf=0, minf=1 00:10:24.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.328 issued rwts: total=2551,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.328 00:10:24.328 Run status group 0 (all jobs): 00:10:24.328 READ: bw=9.95MiB/s (10.4MB/s), 9.95MiB/s-9.95MiB/s (10.4MB/s-10.4MB/s), io=9.96MiB (10.4MB), run=1001-1001msec 00:10:24.328 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:24.328 00:10:24.328 Disk stats (read/write): 00:10:24.328 nvme0n1: ios=2207/2560, merge=0/0, ticks=485/320, in_queue=805, util=90.98% 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
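The fio-wrapper invocation above (-p nvmf -i 4096 -d 1 -t write -r 1 -v) expands to the job file dumped in the log and runs it against the connected namespace. The same workload can be driven with fio directly; a sketch using the parameters shown above, assuming the namespace appears as /dev/nvme0n1 as it does in this run (the file name nmic-write.fio is only illustrative):

  cat > nmic-write.fio <<'EOF'
  [global]
  ioengine=libaio
  direct=1
  rw=write
  bs=4096
  iodepth=1
  numjobs=1
  time_based=1
  runtime=1
  do_verify=1
  verify=crc32c-intel
  verify_dump=1
  verify_backlog=512

  [job0]
  filename=/dev/nvme0n1
  EOF
  fio nmic-write.fio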
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:24.329 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:24.329 rmmod nvme_tcp 00:10:24.329 rmmod nvme_fabrics 00:10:24.588 rmmod nvme_keyring 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 181941 ']' 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 181941 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 181941 ']' 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 181941 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 181941 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 181941' 00:10:24.588 killing process with pid 181941 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 181941 00:10:24.588 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 181941 00:10:24.848 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:24.848 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:24.848 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:24.848 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:24.848 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:24.848 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:24.848 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:24.848 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:24.848 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:24.848 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.848 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.848 02:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.756 02:51:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:26.756 00:10:26.756 real 0m15.124s 00:10:26.756 user 0m33.815s 00:10:26.756 sys 0m5.618s 00:10:26.756 02:51:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.756 02:51:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:26.756 ************************************ 00:10:26.756 END TEST nvmf_nmic 00:10:26.756 ************************************ 00:10:26.756 02:51:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:26.756 02:51:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.756 02:51:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.756 02:51:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:26.756 ************************************ 00:10:26.756 START TEST nvmf_fio_target 00:10:26.756 ************************************ 00:10:26.756 02:51:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:27.016 * Looking for test storage... 
00:10:27.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.016 02:51:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:27.016 02:51:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:27.016 02:51:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.016 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:27.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.017 --rc genhtml_branch_coverage=1 00:10:27.017 --rc genhtml_function_coverage=1 00:10:27.017 --rc genhtml_legend=1 00:10:27.017 --rc geninfo_all_blocks=1 00:10:27.017 --rc geninfo_unexecuted_blocks=1 00:10:27.017 00:10:27.017 ' 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:27.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.017 --rc genhtml_branch_coverage=1 00:10:27.017 --rc genhtml_function_coverage=1 00:10:27.017 --rc genhtml_legend=1 00:10:27.017 --rc geninfo_all_blocks=1 00:10:27.017 --rc geninfo_unexecuted_blocks=1 00:10:27.017 00:10:27.017 ' 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:27.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.017 --rc genhtml_branch_coverage=1 00:10:27.017 --rc genhtml_function_coverage=1 00:10:27.017 --rc genhtml_legend=1 00:10:27.017 --rc geninfo_all_blocks=1 00:10:27.017 --rc geninfo_unexecuted_blocks=1 00:10:27.017 00:10:27.017 ' 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:27.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.017 --rc genhtml_branch_coverage=1 00:10:27.017 --rc genhtml_function_coverage=1 00:10:27.017 --rc genhtml_legend=1 00:10:27.017 --rc geninfo_all_blocks=1 00:10:27.017 --rc geninfo_unexecuted_blocks=1 00:10:27.017 00:10:27.017 ' 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:27.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:27.017 02:51:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:27.017 02:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:33.593 02:51:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:33.593 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:33.593 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.593 02:51:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.593 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:33.594 Found net devices under 0000:af:00.0: cvl_0_0 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:33.594 Found net devices under 0000:af:00.1: cvl_0_1 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:33.594 02:51:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:33.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:10:33.594 00:10:33.594 --- 10.0.0.2 ping statistics --- 00:10:33.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.594 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:33.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:33.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:10:33.594 00:10:33.594 --- 10.0.0.1 ping statistics --- 00:10:33.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.594 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:33.594 02:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=186655 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 186655 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 186655 ']' 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.594 [2024-12-14 02:51:48.069640] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:33.594 [2024-12-14 02:51:48.069685] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.594 [2024-12-14 02:51:48.150301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.594 [2024-12-14 02:51:48.172280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.594 [2024-12-14 02:51:48.172322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.594 [2024-12-14 02:51:48.172332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.594 [2024-12-14 02:51:48.172353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.594 [2024-12-14 02:51:48.172358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.594 [2024-12-14 02:51:48.173750] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.594 [2024-12-14 02:51:48.173860] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.594 [2024-12-14 02:51:48.173970] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.594 [2024-12-14 02:51:48.173971] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:33.594 [2024-12-14 02:51:48.469738] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:33.594 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.854 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:33.854 02:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.113 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:34.113 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.372 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:34.372 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:34.631 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.891 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:34.891 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.891 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:34.891 02:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:35.150 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:35.150 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:35.409 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:35.668 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:35.668 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:35.928 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:35.928 02:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.928 02:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.187 [2024-12-14 02:51:51.184074] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.187 02:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:36.446 02:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:36.706 02:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:37.644 02:51:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:37.644 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:37.645 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:37.645 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:37.645 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:37.645 02:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:40.181 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:40.181 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:40.181 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.181 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:40.181 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.181 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:40.181 02:51:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:40.181 [global] 00:10:40.181 thread=1 00:10:40.181 invalidate=1 00:10:40.181 rw=write 00:10:40.181 time_based=1 00:10:40.181 runtime=1 00:10:40.181 ioengine=libaio 00:10:40.181 direct=1 00:10:40.181 bs=4096 00:10:40.181 iodepth=1 00:10:40.181 norandommap=0 00:10:40.181 numjobs=1 00:10:40.181 00:10:40.181 verify_dump=1 00:10:40.181 verify_backlog=512 00:10:40.181 verify_state_save=0 00:10:40.181 do_verify=1 00:10:40.181 verify=crc32c-intel 00:10:40.181 [job0] 00:10:40.181 filename=/dev/nvme0n1 00:10:40.181 [job1] 00:10:40.181 filename=/dev/nvme0n2 00:10:40.181 [job2] 00:10:40.181 filename=/dev/nvme0n3 00:10:40.181 [job3] 00:10:40.181 filename=/dev/nvme0n4 00:10:40.181 Could not set queue depth (nvme0n1) 00:10:40.181 Could not set queue depth (nvme0n2) 00:10:40.181 Could not set queue depth (nvme0n3) 00:10:40.181 Could not set queue depth (nvme0n4) 00:10:40.181 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.181 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.181 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.181 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.181 fio-3.35 00:10:40.181 Starting 4 threads 00:10:41.560 00:10:41.560 job0: (groupid=0, jobs=1): err= 0: pid=188028: Sat Dec 14 02:51:56 2024 00:10:41.560 read: IOPS=52, BW=209KiB/s (214kB/s)(212KiB/1013msec) 00:10:41.560 slat (nsec): min=6772, max=24886, avg=13492.45, stdev=7420.92 00:10:41.560 clat (usec): min=196, max=41260, avg=17187.44, stdev=20234.42 00:10:41.560 lat (usec): min=203, max=41275, avg=17200.93, stdev=20241.35 00:10:41.560 clat percentiles (usec): 00:10:41.560 | 1.00th=[ 196], 5.00th=[ 212], 10.00th=[ 229], 20.00th=[ 277], 
00:10:41.560 | 30.00th=[ 293], 40.00th=[ 338], 50.00th=[ 375], 60.00th=[40633], 00:10:41.560 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:41.560 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:41.560 | 99.99th=[41157] 00:10:41.560 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:10:41.560 slat (nsec): min=9117, max=85721, avg=11284.39, stdev=3721.78 00:10:41.560 clat (usec): min=134, max=331, avg=182.84, stdev=30.94 00:10:41.560 lat (usec): min=145, max=417, avg=194.13, stdev=31.18 00:10:41.560 clat percentiles (usec): 00:10:41.560 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 155], 00:10:41.560 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 182], 00:10:41.560 | 70.00th=[ 204], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 235], 00:10:41.560 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 334], 99.95th=[ 334], 00:10:41.560 | 99.99th=[ 334] 00:10:41.560 bw ( KiB/s): min= 4096, max= 4096, per=29.14%, avg=4096.00, stdev= 0.00, samples=1 00:10:41.560 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:41.560 lat (usec) : 250=90.97%, 500=5.13% 00:10:41.560 lat (msec) : 50=3.89% 00:10:41.560 cpu : usr=0.49%, sys=0.69%, ctx=566, majf=0, minf=1 00:10:41.560 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.560 issued rwts: total=53,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.560 job1: (groupid=0, jobs=1): err= 0: pid=188029: Sat Dec 14 02:51:56 2024 00:10:41.560 read: IOPS=28, BW=114KiB/s (117kB/s)(116KiB/1015msec) 00:10:41.560 slat (nsec): min=6821, max=32050, avg=18838.24, stdev=6439.63 00:10:41.560 clat (usec): min=220, max=42028, avg=31593.94, stdev=17976.44 00:10:41.560 lat (usec): min=227, max=42050, avg=31612.78, stdev=17979.74 00:10:41.560 clat percentiles (usec): 00:10:41.560 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 371], 00:10:41.560 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:41.560 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:41.560 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:41.560 | 99.99th=[42206] 00:10:41.560 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:10:41.560 slat (nsec): min=8967, max=42034, avg=10150.61, stdev=1629.47 00:10:41.560 clat (usec): min=122, max=290, avg=178.21, stdev=34.77 00:10:41.560 lat (usec): min=132, max=332, avg=188.36, stdev=35.06 00:10:41.560 clat percentiles (usec): 00:10:41.560 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:10:41.560 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 165], 60.00th=[ 184], 00:10:41.560 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 235], 00:10:41.560 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 289], 99.95th=[ 289], 00:10:41.560 | 99.99th=[ 289] 00:10:41.560 bw ( KiB/s): min= 4096, max= 4096, per=29.14%, avg=4096.00, stdev= 0.00, samples=1 00:10:41.560 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:41.560 lat (usec) : 250=94.45%, 500=1.48% 00:10:41.560 lat (msec) : 50=4.07% 00:10:41.560 cpu : usr=0.20%, sys=0.49%, ctx=541, majf=0, minf=1 00:10:41.560 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.560 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.560 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.560 job2: (groupid=0, jobs=1): err= 0: pid=188030: Sat Dec 14 02:51:56 2024 00:10:41.560 read: IOPS=74, BW=298KiB/s (305kB/s)(304KiB/1020msec) 00:10:41.560 slat (nsec): min=7741, max=23125, avg=12937.13, stdev=5823.38 00:10:41.560 clat (usec): min=199, max=41503, avg=12077.29, stdev=18600.20 00:10:41.560 lat (usec): min=210, max=41512, avg=12090.23, stdev=18600.02 00:10:41.560 clat percentiles (usec): 00:10:41.560 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 225], 00:10:41.560 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 260], 00:10:41.560 | 70.00th=[ 1811], 80.00th=[40633], 90.00th=[41157], 95.00th=[41681], 00:10:41.560 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:41.560 | 99.99th=[41681] 00:10:41.560 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:10:41.560 slat (nsec): min=9202, max=53960, avg=12666.60, stdev=3213.79 00:10:41.560 clat (usec): min=143, max=409, avg=180.15, stdev=24.70 00:10:41.560 lat (usec): min=153, max=428, avg=192.82, stdev=25.88 00:10:41.560 clat percentiles (usec): 00:10:41.560 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:10:41.560 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 182], 00:10:41.560 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 204], 95.00th=[ 221], 00:10:41.560 | 99.00th=[ 251], 99.50th=[ 343], 99.90th=[ 412], 99.95th=[ 412], 00:10:41.560 | 99.99th=[ 412] 00:10:41.560 bw ( KiB/s): min= 4096, max= 4096, per=29.14%, avg=4096.00, stdev= 0.00, samples=1 00:10:41.560 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:41.560 lat (usec) : 250=92.86%, 500=3.06%, 1000=0.17% 00:10:41.560 lat (msec) : 2=0.17%, 50=3.74% 00:10:41.560 cpu : usr=0.59%, sys=0.88%, ctx=588, majf=0, minf=2 00:10:41.560 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.560 issued rwts: total=76,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.560 job3: (groupid=0, jobs=1): err= 0: pid=188031: Sat Dec 14 02:51:56 2024 00:10:41.560 read: IOPS=1971, BW=7884KiB/s (8073kB/s)(7892KiB/1001msec) 00:10:41.560 slat (nsec): min=6405, max=26896, avg=7187.62, stdev=1070.87 00:10:41.560 clat (usec): min=172, max=41229, avg=326.63, stdev=1926.86 00:10:41.560 lat (usec): min=179, max=41236, avg=333.82, stdev=1926.86 00:10:41.560 clat percentiles (usec): 00:10:41.560 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:10:41.560 | 30.00th=[ 212], 40.00th=[ 227], 50.00th=[ 237], 60.00th=[ 243], 00:10:41.560 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:10:41.560 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[41157], 99.95th=[41157], 00:10:41.560 | 99.99th=[41157] 00:10:41.560 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:41.560 slat (nsec): min=8194, max=58816, avg=10541.83, stdev=2141.31 00:10:41.560 clat (usec): min=112, max=434, avg=151.65, stdev=31.39 00:10:41.560 lat (usec): min=122, max=458, avg=162.19, stdev=32.24 
00:10:41.560 clat percentiles (usec): 00:10:41.560 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 131], 00:10:41.560 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 143], 00:10:41.560 | 70.00th=[ 153], 80.00th=[ 178], 90.00th=[ 208], 95.00th=[ 223], 00:10:41.560 | 99.00th=[ 237], 99.50th=[ 243], 99.90th=[ 281], 99.95th=[ 363], 00:10:41.560 | 99.99th=[ 437] 00:10:41.560 bw ( KiB/s): min=12224, max=12224, per=86.97%, avg=12224.00, stdev= 0.00, samples=1 00:10:41.560 iops : min= 3056, max= 3056, avg=3056.00, stdev= 0.00, samples=1 00:10:41.560 lat (usec) : 250=87.17%, 500=12.71% 00:10:41.560 lat (msec) : 50=0.12% 00:10:41.560 cpu : usr=1.50%, sys=4.40%, ctx=4021, majf=0, minf=1 00:10:41.560 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.560 issued rwts: total=1973,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.560 00:10:41.560 Run status group 0 (all jobs): 00:10:41.560 READ: bw=8357KiB/s (8557kB/s), 114KiB/s-7884KiB/s (117kB/s-8073kB/s), io=8524KiB (8729kB), run=1001-1020msec 00:10:41.560 WRITE: bw=13.7MiB/s (14.4MB/s), 2008KiB/s-8184KiB/s (2056kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1020msec 00:10:41.560 00:10:41.560 Disk stats (read/write): 00:10:41.560 nvme0n1: ios=99/512, merge=0/0, ticks=775/85, in_queue=860, util=86.47% 00:10:41.560 nvme0n2: ios=18/512, merge=0/0, ticks=751/92, in_queue=843, util=86.56% 00:10:41.560 nvme0n3: ios=69/512, merge=0/0, ticks=714/85, in_queue=799, util=88.91% 00:10:41.560 nvme0n4: ios=1648/2048, merge=0/0, ticks=463/296, in_queue=759, util=89.57% 00:10:41.561 02:51:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:41.561 [global] 00:10:41.561 thread=1 00:10:41.561 invalidate=1 00:10:41.561 rw=randwrite 00:10:41.561 time_based=1 00:10:41.561 runtime=1 00:10:41.561 ioengine=libaio 00:10:41.561 direct=1 00:10:41.561 bs=4096 00:10:41.561 iodepth=1 00:10:41.561 norandommap=0 00:10:41.561 numjobs=1 00:10:41.561 00:10:41.561 verify_dump=1 00:10:41.561 verify_backlog=512 00:10:41.561 verify_state_save=0 00:10:41.561 do_verify=1 00:10:41.561 verify=crc32c-intel 00:10:41.561 [job0] 00:10:41.561 filename=/dev/nvme0n1 00:10:41.561 [job1] 00:10:41.561 filename=/dev/nvme0n2 00:10:41.561 [job2] 00:10:41.561 filename=/dev/nvme0n3 00:10:41.561 [job3] 00:10:41.561 filename=/dev/nvme0n4 00:10:41.561 Could not set queue depth (nvme0n1) 00:10:41.561 Could not set queue depth (nvme0n2) 00:10:41.561 Could not set queue depth (nvme0n3) 00:10:41.561 Could not set queue depth (nvme0n4) 00:10:41.561 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.561 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.561 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.561 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.561 fio-3.35 00:10:41.561 Starting 4 threads 00:10:42.940 00:10:42.940 job0: (groupid=0, jobs=1): err= 0: pid=188400: Sat Dec 14 02:51:57 2024 00:10:42.940 read: IOPS=542, 
BW=2172KiB/s (2224kB/s)(2224KiB/1024msec) 00:10:42.940 slat (nsec): min=7039, max=24438, avg=9154.18, stdev=2663.41 00:10:42.940 clat (usec): min=192, max=42110, avg=1491.34, stdev=7019.30 00:10:42.940 lat (usec): min=201, max=42133, avg=1500.49, stdev=7021.01 00:10:42.940 clat percentiles (usec): 00:10:42.940 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 223], 00:10:42.940 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:10:42.940 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 302], 95.00th=[ 420], 00:10:42.940 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:42.940 | 99.99th=[42206] 00:10:42.940 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets 00:10:42.940 slat (nsec): min=10535, max=41155, avg=12162.16, stdev=1627.87 00:10:42.940 clat (usec): min=130, max=311, avg=167.58, stdev=26.00 00:10:42.940 lat (usec): min=142, max=333, avg=179.75, stdev=26.14 00:10:42.940 clat percentiles (usec): 00:10:42.940 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:10:42.940 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:10:42.940 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 215], 95.00th=[ 227], 00:10:42.940 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 310], 99.95th=[ 314], 00:10:42.940 | 99.99th=[ 314] 00:10:42.940 bw ( KiB/s): min= 8192, max= 8192, per=29.26%, avg=8192.00, stdev= 0.00, samples=1 00:10:42.940 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:42.940 lat (usec) : 250=91.90%, 500=7.03% 00:10:42.940 lat (msec) : 50=1.08% 00:10:42.940 cpu : usr=1.08%, sys=1.76%, ctx=1582, majf=0, minf=1 00:10:42.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.940 issued rwts: total=556,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.940 job1: (groupid=0, jobs=1): err= 0: pid=188401: Sat Dec 14 02:51:57 2024 00:10:42.940 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:42.940 slat (nsec): min=6997, max=22494, avg=8083.07, stdev=1136.01 00:10:42.940 clat (usec): min=183, max=1239, avg=250.99, stdev=56.70 00:10:42.940 lat (usec): min=191, max=1246, avg=259.07, stdev=56.70 00:10:42.940 clat percentiles (usec): 00:10:42.940 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 217], 00:10:42.940 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 247], 00:10:42.940 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 347], 00:10:42.940 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 570], 99.95th=[ 668], 00:10:42.940 | 99.99th=[ 1237] 00:10:42.940 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:42.940 slat (nsec): min=10246, max=43084, avg=11648.07, stdev=1951.71 00:10:42.940 clat (usec): min=114, max=1241, avg=166.38, stdev=33.25 00:10:42.940 lat (usec): min=125, max=1255, avg=178.03, stdev=33.43 00:10:42.940 clat percentiles (usec): 00:10:42.940 | 1.00th=[ 126], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 145], 00:10:42.940 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 167], 00:10:42.940 | 70.00th=[ 178], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 215], 00:10:42.940 | 99.00th=[ 235], 99.50th=[ 245], 99.90th=[ 285], 99.95th=[ 285], 00:10:42.940 | 99.99th=[ 1237] 00:10:42.940 bw ( KiB/s): min= 8296, max= 8296, per=29.63%, 
avg=8296.00, stdev= 0.00, samples=1 00:10:42.940 iops : min= 2074, max= 2074, avg=2074.00, stdev= 0.00, samples=1 00:10:42.940 lat (usec) : 250=83.29%, 500=16.41%, 750=0.26% 00:10:42.940 lat (msec) : 2=0.04% 00:10:42.940 cpu : usr=4.90%, sys=6.30%, ctx=4609, majf=0, minf=1 00:10:42.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.940 issued rwts: total=2048,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.940 job2: (groupid=0, jobs=1): err= 0: pid=188402: Sat Dec 14 02:51:57 2024 00:10:42.940 read: IOPS=636, BW=2548KiB/s (2609kB/s)(2568KiB/1008msec) 00:10:42.940 slat (nsec): min=6837, max=36545, avg=8269.32, stdev=2660.12 00:10:42.940 clat (usec): min=203, max=41093, avg=1235.34, stdev=6149.28 00:10:42.940 lat (usec): min=211, max=41115, avg=1243.61, stdev=6151.30 00:10:42.940 clat percentiles (usec): 00:10:42.940 | 1.00th=[ 210], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 253], 00:10:42.940 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:10:42.940 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 383], 95.00th=[ 441], 00:10:42.940 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:42.940 | 99.99th=[41157] 00:10:42.940 write: IOPS=1015, BW=4063KiB/s (4161kB/s)(4096KiB/1008msec); 0 zone resets 00:10:42.940 slat (nsec): min=10168, max=50563, avg=11509.74, stdev=2269.50 00:10:42.940 clat (usec): min=129, max=335, avg=188.26, stdev=26.33 00:10:42.940 lat (usec): min=139, max=353, avg=199.77, stdev=26.55 00:10:42.940 clat percentiles (usec): 00:10:42.940 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 165], 00:10:42.940 | 30.00th=[ 172], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 194], 00:10:42.940 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 231], 00:10:42.940 | 99.00th=[ 262], 99.50th=[ 281], 99.90th=[ 306], 99.95th=[ 334], 00:10:42.940 | 99.99th=[ 334] 00:10:42.940 bw ( KiB/s): min= 8192, max= 8192, per=29.26%, avg=8192.00, stdev= 0.00, samples=1 00:10:42.940 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:42.940 lat (usec) : 250=67.47%, 500=31.51%, 750=0.12% 00:10:42.940 lat (msec) : 50=0.90% 00:10:42.940 cpu : usr=2.18%, sys=1.69%, ctx=1666, majf=0, minf=2 00:10:42.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.940 issued rwts: total=642,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.940 job3: (groupid=0, jobs=1): err= 0: pid=188403: Sat Dec 14 02:51:57 2024 00:10:42.940 read: IOPS=2362, BW=9451KiB/s (9677kB/s)(9460KiB/1001msec) 00:10:42.940 slat (nsec): min=5999, max=29397, avg=8608.48, stdev=1412.49 00:10:42.940 clat (usec): min=180, max=657, avg=220.65, stdev=23.79 00:10:42.940 lat (usec): min=189, max=665, avg=229.26, stdev=24.09 00:10:42.940 clat percentiles (usec): 00:10:42.940 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:10:42.940 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:10:42.940 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 251], 00:10:42.941 | 99.00th=[ 297], 99.50th=[ 310], 
99.90th=[ 490], 99.95th=[ 553], 00:10:42.941 | 99.99th=[ 660] 00:10:42.941 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:42.941 slat (nsec): min=10307, max=60297, avg=11901.93, stdev=1638.66 00:10:42.941 clat (usec): min=125, max=381, avg=161.67, stdev=15.02 00:10:42.941 lat (usec): min=136, max=392, avg=173.57, stdev=15.33 00:10:42.941 clat percentiles (usec): 00:10:42.941 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 151], 00:10:42.941 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:10:42.941 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 186], 00:10:42.941 | 99.00th=[ 202], 99.50th=[ 208], 99.90th=[ 255], 99.95th=[ 269], 00:10:42.941 | 99.99th=[ 383] 00:10:42.941 bw ( KiB/s): min=11656, max=11656, per=41.63%, avg=11656.00, stdev= 0.00, samples=1 00:10:42.941 iops : min= 2914, max= 2914, avg=2914.00, stdev= 0.00, samples=1 00:10:42.941 lat (usec) : 250=97.50%, 500=2.46%, 750=0.04% 00:10:42.941 cpu : usr=2.30%, sys=5.90%, ctx=4925, majf=0, minf=2 00:10:42.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.941 issued rwts: total=2365,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.941 00:10:42.941 Run status group 0 (all jobs): 00:10:42.941 READ: bw=21.4MiB/s (22.4MB/s), 2172KiB/s-9451KiB/s (2224kB/s-9677kB/s), io=21.9MiB (23.0MB), run=1001-1024msec 00:10:42.941 WRITE: bw=27.3MiB/s (28.7MB/s), 4000KiB/s-9.99MiB/s (4096kB/s-10.5MB/s), io=28.0MiB (29.4MB), run=1001-1024msec 00:10:42.941 00:10:42.941 Disk stats (read/write): 00:10:42.941 nvme0n1: ios=586/1024, merge=0/0, ticks=1571/174, in_queue=1745, util=86.07% 00:10:42.941 nvme0n2: ios=1823/2048, merge=0/0, ticks=1338/320, in_queue=1658, util=90.15% 00:10:42.941 nvme0n3: ios=695/1024, merge=0/0, ticks=700/187, in_queue=887, util=94.70% 00:10:42.941 nvme0n4: ios=2105/2177, merge=0/0, ticks=513/330, in_queue=843, util=95.29% 00:10:42.941 02:51:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:42.941 [global] 00:10:42.941 thread=1 00:10:42.941 invalidate=1 00:10:42.941 rw=write 00:10:42.941 time_based=1 00:10:42.941 runtime=1 00:10:42.941 ioengine=libaio 00:10:42.941 direct=1 00:10:42.941 bs=4096 00:10:42.941 iodepth=128 00:10:42.941 norandommap=0 00:10:42.941 numjobs=1 00:10:42.941 00:10:42.941 verify_dump=1 00:10:42.941 verify_backlog=512 00:10:42.941 verify_state_save=0 00:10:42.941 do_verify=1 00:10:42.941 verify=crc32c-intel 00:10:42.941 [job0] 00:10:42.941 filename=/dev/nvme0n1 00:10:42.941 [job1] 00:10:42.941 filename=/dev/nvme0n2 00:10:42.941 [job2] 00:10:42.941 filename=/dev/nvme0n3 00:10:42.941 [job3] 00:10:42.941 filename=/dev/nvme0n4 00:10:42.941 Could not set queue depth (nvme0n1) 00:10:42.941 Could not set queue depth (nvme0n2) 00:10:42.941 Could not set queue depth (nvme0n3) 00:10:42.941 Could not set queue depth (nvme0n4) 00:10:43.199 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.199 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.199 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.199 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.199 fio-3.35 00:10:43.199 Starting 4 threads 00:10:44.579 00:10:44.579 job0: (groupid=0, jobs=1): err= 0: pid=188765: Sat Dec 14 02:51:59 2024 00:10:44.579 read: IOPS=5332, BW=20.8MiB/s (21.8MB/s)(21.0MiB/1006msec) 00:10:44.579 slat (nsec): min=1418, max=11150k, avg=91257.15, stdev=649471.33 00:10:44.579 clat (usec): min=2990, max=25144, avg=11097.97, stdev=3312.94 00:10:44.579 lat (usec): min=3360, max=25156, avg=11189.23, stdev=3356.40 00:10:44.579 clat percentiles (usec): 00:10:44.579 | 1.00th=[ 4293], 5.00th=[ 7308], 10.00th=[ 8029], 20.00th=[ 8979], 00:10:44.579 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10290], 60.00th=[10683], 00:10:44.579 | 70.00th=[11207], 80.00th=[13173], 90.00th=[16057], 95.00th=[17957], 00:10:44.579 | 99.00th=[22676], 99.50th=[23200], 99.90th=[24249], 99.95th=[25035], 00:10:44.579 | 99.99th=[25035] 00:10:44.579 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:10:44.579 slat (usec): min=2, max=8223, avg=85.34, stdev=402.76 00:10:44.579 clat (usec): min=1517, max=43206, avg=12070.47, stdev=7373.78 00:10:44.579 lat (usec): min=1533, max=43219, avg=12155.81, stdev=7425.45 00:10:44.579 clat percentiles (usec): 00:10:44.579 | 1.00th=[ 3228], 5.00th=[ 4621], 10.00th=[ 6325], 20.00th=[ 8160], 00:10:44.579 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:10:44.579 | 70.00th=[10421], 80.00th=[13042], 90.00th=[20317], 95.00th=[31065], 00:10:44.579 | 99.00th=[39584], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:10:44.579 | 99.99th=[43254] 00:10:44.579 bw ( KiB/s): min=21872, max=23184, per=33.42%, avg=22528.00, stdev=927.72, samples=2 00:10:44.579 iops : min= 5468, max= 5796, avg=5632.00, stdev=231.93, samples=2 00:10:44.579 lat (msec) : 2=0.02%, 4=1.93%, 10=38.98%, 20=52.60%, 50=6.48% 00:10:44.579 cpu : usr=4.78%, sys=5.67%, ctx=718, majf=0, minf=1 00:10:44.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:44.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.579 issued rwts: total=5364,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.579 job1: (groupid=0, jobs=1): err= 0: pid=188766: Sat Dec 14 02:51:59 2024 00:10:44.579 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:10:44.579 slat (nsec): min=1100, max=27629k, avg=122924.25, stdev=1043694.03 00:10:44.579 clat (usec): min=3218, max=58146, avg=16103.54, stdev=9986.60 00:10:44.579 lat (usec): min=3225, max=58274, avg=16226.46, stdev=10046.39 00:10:44.579 clat percentiles (usec): 00:10:44.579 | 1.00th=[ 5276], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 8356], 00:10:44.579 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[10945], 60.00th=[13698], 00:10:44.579 | 70.00th=[22676], 80.00th=[25297], 90.00th=[30278], 95.00th=[33817], 00:10:44.579 | 99.00th=[46924], 99.50th=[46924], 99.90th=[46924], 99.95th=[57934], 00:10:44.579 | 99.99th=[57934] 00:10:44.579 write: IOPS=3847, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1006msec); 0 zone resets 00:10:44.579 slat (usec): min=2, max=9699, avg=127.00, stdev=726.51 00:10:44.579 clat (usec): min=910, max=94221, avg=18087.43, stdev=15463.53 00:10:44.579 lat (usec): min=954, max=94225, avg=18214.43, stdev=15557.98 00:10:44.579 clat 
percentiles (usec): 00:10:44.579 | 1.00th=[ 3523], 5.00th=[ 4752], 10.00th=[ 5538], 20.00th=[ 7570], 00:10:44.579 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[ 9896], 60.00th=[16909], 00:10:44.579 | 70.00th=[19792], 80.00th=[27132], 90.00th=[45876], 95.00th=[55313], 00:10:44.579 | 99.00th=[59507], 99.50th=[60556], 99.90th=[83362], 99.95th=[83362], 00:10:44.579 | 99.99th=[93848] 00:10:44.579 bw ( KiB/s): min= 9952, max=19992, per=22.21%, avg=14972.00, stdev=7099.35, samples=2 00:10:44.579 iops : min= 2488, max= 4998, avg=3743.00, stdev=1774.84, samples=2 00:10:44.579 lat (usec) : 1000=0.04% 00:10:44.579 lat (msec) : 2=0.09%, 4=1.46%, 10=42.49%, 20=25.26%, 50=25.93% 00:10:44.579 lat (msec) : 100=4.72% 00:10:44.579 cpu : usr=3.18%, sys=4.08%, ctx=340, majf=0, minf=1 00:10:44.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:44.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.579 issued rwts: total=3584,3871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.579 job2: (groupid=0, jobs=1): err= 0: pid=188768: Sat Dec 14 02:51:59 2024 00:10:44.579 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:10:44.579 slat (nsec): min=1319, max=36233k, avg=105461.53, stdev=980877.30 00:10:44.579 clat (usec): min=980, max=91666, avg=14013.27, stdev=9944.05 00:10:44.579 lat (usec): min=1000, max=91684, avg=14118.73, stdev=10028.94 00:10:44.579 clat percentiles (usec): 00:10:44.579 | 1.00th=[ 3687], 5.00th=[ 5866], 10.00th=[ 8455], 20.00th=[10290], 00:10:44.579 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11731], 00:10:44.579 | 70.00th=[13173], 80.00th=[14222], 90.00th=[15533], 95.00th=[39060], 00:10:44.579 | 99.00th=[58983], 99.50th=[60031], 99.90th=[60031], 99.95th=[60556], 00:10:44.579 | 99.99th=[91751] 00:10:44.579 write: IOPS=4528, BW=17.7MiB/s (18.6MB/s)(17.7MiB/1002msec); 0 zone resets 00:10:44.579 slat (usec): min=2, max=77828, avg=112.80, stdev=1441.13 00:10:44.579 clat (usec): min=542, max=124263, avg=12637.98, stdev=8455.96 00:10:44.579 lat (msec): min=5, max=124, avg=12.75, stdev= 8.66 00:10:44.579 clat percentiles (msec): 00:10:44.579 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:10:44.579 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:10:44.579 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 18], 95.00th=[ 22], 00:10:44.579 | 99.00th=[ 24], 99.50th=[ 80], 99.90th=[ 125], 99.95th=[ 125], 00:10:44.579 | 99.99th=[ 125] 00:10:44.579 bw ( KiB/s): min=12288, max=22992, per=26.17%, avg=17640.00, stdev=7568.87, samples=2 00:10:44.579 iops : min= 3072, max= 5748, avg=4410.00, stdev=1892.22, samples=2 00:10:44.579 lat (usec) : 750=0.01%, 1000=0.02% 00:10:44.579 lat (msec) : 2=0.30%, 4=0.21%, 10=14.26%, 20=78.10%, 50=5.26% 00:10:44.579 lat (msec) : 100=1.64%, 250=0.20% 00:10:44.579 cpu : usr=3.10%, sys=6.19%, ctx=470, majf=0, minf=1 00:10:44.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:44.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.579 issued rwts: total=4096,4538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.579 job3: (groupid=0, jobs=1): err= 0: pid=188769: Sat Dec 14 02:51:59 2024 00:10:44.579 read: 
IOPS=3218, BW=12.6MiB/s (13.2MB/s)(13.2MiB/1046msec) 00:10:44.579 slat (nsec): min=1246, max=20947k, avg=134334.36, stdev=931575.19 00:10:44.579 clat (usec): min=3829, max=58704, avg=18702.22, stdev=9175.51 00:10:44.579 lat (usec): min=3834, max=62966, avg=18836.55, stdev=9219.05 00:10:44.579 clat percentiles (usec): 00:10:44.579 | 1.00th=[ 9241], 5.00th=[11731], 10.00th=[12911], 20.00th=[13829], 00:10:44.579 | 30.00th=[14484], 40.00th=[15008], 50.00th=[15664], 60.00th=[17171], 00:10:44.579 | 70.00th=[18482], 80.00th=[20579], 90.00th=[26346], 95.00th=[46924], 00:10:44.579 | 99.00th=[58459], 99.50th=[58459], 99.90th=[58459], 99.95th=[58459], 00:10:44.579 | 99.99th=[58459] 00:10:44.579 write: IOPS=3426, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1046msec); 0 zone resets 00:10:44.579 slat (usec): min=2, max=19611, avg=147.80, stdev=946.08 00:10:44.579 clat (usec): min=6000, max=66368, avg=19181.48, stdev=11822.47 00:10:44.579 lat (usec): min=6007, max=66373, avg=19329.28, stdev=11903.62 00:10:44.579 clat percentiles (usec): 00:10:44.579 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[11207], 20.00th=[13173], 00:10:44.579 | 30.00th=[13698], 40.00th=[14222], 50.00th=[14746], 60.00th=[15401], 00:10:44.579 | 70.00th=[16712], 80.00th=[22938], 90.00th=[35390], 95.00th=[53216], 00:10:44.579 | 99.00th=[60556], 99.50th=[62653], 99.90th=[66323], 99.95th=[66323], 00:10:44.579 | 99.99th=[66323] 00:10:44.579 bw ( KiB/s): min=13136, max=15536, per=21.27%, avg=14336.00, stdev=1697.06, samples=2 00:10:44.579 iops : min= 3284, max= 3884, avg=3584.00, stdev=424.26, samples=2 00:10:44.579 lat (msec) : 4=0.12%, 10=4.57%, 20=72.21%, 50=18.47%, 100=4.63% 00:10:44.579 cpu : usr=2.01%, sys=4.40%, ctx=239, majf=0, minf=1 00:10:44.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:44.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.579 issued rwts: total=3367,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.579 00:10:44.579 Run status group 0 (all jobs): 00:10:44.579 READ: bw=61.3MiB/s (64.3MB/s), 12.6MiB/s-20.8MiB/s (13.2MB/s-21.8MB/s), io=64.1MiB (67.2MB), run=1002-1046msec 00:10:44.579 WRITE: bw=65.8MiB/s (69.0MB/s), 13.4MiB/s-21.9MiB/s (14.0MB/s-22.9MB/s), io=68.8MiB (72.2MB), run=1002-1046msec 00:10:44.579 00:10:44.579 Disk stats (read/write): 00:10:44.579 nvme0n1: ios=4496/4608, merge=0/0, ticks=47326/51707, in_queue=99033, util=93.69% 00:10:44.579 nvme0n2: ios=2958/3072, merge=0/0, ticks=37071/37745, in_queue=74816, util=84.69% 00:10:44.579 nvme0n3: ios=3106/3383, merge=0/0, ticks=26766/28538, in_queue=55304, util=99.67% 00:10:44.579 nvme0n4: ios=2591/2852, merge=0/0, ticks=25204/29900, in_queue=55104, util=98.68% 00:10:44.579 02:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:44.579 [global] 00:10:44.579 thread=1 00:10:44.579 invalidate=1 00:10:44.579 rw=randwrite 00:10:44.579 time_based=1 00:10:44.579 runtime=1 00:10:44.579 ioengine=libaio 00:10:44.579 direct=1 00:10:44.579 bs=4096 00:10:44.579 iodepth=128 00:10:44.579 norandommap=0 00:10:44.579 numjobs=1 00:10:44.579 00:10:44.579 verify_dump=1 00:10:44.579 verify_backlog=512 00:10:44.579 verify_state_save=0 00:10:44.579 do_verify=1 00:10:44.579 verify=crc32c-intel 00:10:44.579 [job0] 00:10:44.579 
filename=/dev/nvme0n1 00:10:44.579 [job1] 00:10:44.579 filename=/dev/nvme0n2 00:10:44.579 [job2] 00:10:44.579 filename=/dev/nvme0n3 00:10:44.579 [job3] 00:10:44.579 filename=/dev/nvme0n4 00:10:44.579 Could not set queue depth (nvme0n1) 00:10:44.579 Could not set queue depth (nvme0n2) 00:10:44.579 Could not set queue depth (nvme0n3) 00:10:44.579 Could not set queue depth (nvme0n4) 00:10:44.838 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.838 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.838 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.838 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.838 fio-3.35 00:10:44.838 Starting 4 threads 00:10:46.218 00:10:46.218 job0: (groupid=0, jobs=1): err= 0: pid=189140: Sat Dec 14 02:52:01 2024 00:10:46.218 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:10:46.218 slat (nsec): min=1512, max=19544k, avg=92603.04, stdev=552239.93 00:10:46.218 clat (usec): min=6647, max=54890, avg=11932.21, stdev=6780.58 00:10:46.218 lat (usec): min=6699, max=54892, avg=12024.81, stdev=6804.08 00:10:46.218 clat percentiles (usec): 00:10:46.218 | 1.00th=[ 7504], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[ 9110], 00:10:46.218 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[10945], 00:10:46.218 | 70.00th=[11469], 80.00th=[12256], 90.00th=[13566], 95.00th=[21890], 00:10:46.218 | 99.00th=[52691], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:10:46.218 | 99.99th=[54789] 00:10:46.218 write: IOPS=5229, BW=20.4MiB/s (21.4MB/s)(20.6MiB/1009msec); 0 zone resets 00:10:46.218 slat (usec): min=2, max=11823, avg=96.16, stdev=533.88 00:10:46.218 clat (usec): min=616, max=53845, avg=12481.32, stdev=7208.00 00:10:46.218 lat (usec): min=6438, max=53850, avg=12577.48, stdev=7239.12 00:10:46.218 clat percentiles (usec): 00:10:46.218 | 1.00th=[ 6783], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8717], 00:10:46.218 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10683], 00:10:46.218 | 70.00th=[11469], 80.00th=[12780], 90.00th=[20579], 95.00th=[26608], 00:10:46.218 | 99.00th=[51643], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:10:46.218 | 99.99th=[53740] 00:10:46.218 bw ( KiB/s): min=17432, max=23760, per=30.73%, avg=20596.00, stdev=4474.57, samples=2 00:10:46.218 iops : min= 4358, max= 5940, avg=5149.00, stdev=1118.64, samples=2 00:10:46.218 lat (usec) : 750=0.01% 00:10:46.218 lat (msec) : 10=40.69%, 20=51.36%, 50=6.42%, 100=1.51% 00:10:46.218 cpu : usr=3.27%, sys=4.96%, ctx=622, majf=0, minf=1 00:10:46.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:46.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.218 issued rwts: total=5120,5277,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.218 job1: (groupid=0, jobs=1): err= 0: pid=189141: Sat Dec 14 02:52:01 2024 00:10:46.218 read: IOPS=3014, BW=11.8MiB/s (12.3MB/s)(11.9MiB/1009msec) 00:10:46.218 slat (nsec): min=1255, max=19668k, avg=140578.78, stdev=956575.61 00:10:46.218 clat (usec): min=4211, max=72244, avg=16520.89, stdev=9849.45 00:10:46.218 lat (usec): min=5958, max=72255, avg=16661.46, 
stdev=9932.83 00:10:46.218 clat percentiles (usec): 00:10:46.218 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[11469], 00:10:46.218 | 30.00th=[12125], 40.00th=[13042], 50.00th=[13829], 60.00th=[14353], 00:10:46.218 | 70.00th=[15270], 80.00th=[19530], 90.00th=[23462], 95.00th=[37487], 00:10:46.218 | 99.00th=[63701], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:10:46.218 | 99.99th=[71828] 00:10:46.218 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:10:46.218 slat (usec): min=2, max=25390, avg=176.95, stdev=1070.42 00:10:46.218 clat (usec): min=1461, max=72252, avg=25299.56, stdev=14110.94 00:10:46.218 lat (usec): min=1472, max=72274, avg=25476.51, stdev=14218.88 00:10:46.218 clat percentiles (usec): 00:10:46.218 | 1.00th=[ 2507], 5.00th=[ 7177], 10.00th=[ 9765], 20.00th=[11600], 00:10:46.218 | 30.00th=[13173], 40.00th=[19530], 50.00th=[21627], 60.00th=[26608], 00:10:46.218 | 70.00th=[32113], 80.00th=[38536], 90.00th=[47973], 95.00th=[51119], 00:10:46.218 | 99.00th=[55313], 99.50th=[55837], 99.90th=[66323], 99.95th=[71828], 00:10:46.218 | 99.99th=[71828] 00:10:46.218 bw ( KiB/s): min= 9040, max=15536, per=18.33%, avg=12288.00, stdev=4593.37, samples=2 00:10:46.218 iops : min= 2260, max= 3884, avg=3072.00, stdev=1148.34, samples=2 00:10:46.218 lat (msec) : 2=0.44%, 4=0.36%, 10=8.77%, 20=52.81%, 50=32.97% 00:10:46.218 lat (msec) : 100=4.65% 00:10:46.218 cpu : usr=2.38%, sys=4.96%, ctx=297, majf=0, minf=1 00:10:46.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:46.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.218 issued rwts: total=3042,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.218 job2: (groupid=0, jobs=1): err= 0: pid=189142: Sat Dec 14 02:52:01 2024 00:10:46.218 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:10:46.218 slat (nsec): min=1115, max=12545k, avg=108071.29, stdev=662396.10 00:10:46.218 clat (usec): min=5062, max=36566, avg=13791.37, stdev=2975.73 00:10:46.218 lat (usec): min=5068, max=36573, avg=13899.44, stdev=3030.10 00:10:46.218 clat percentiles (usec): 00:10:46.218 | 1.00th=[ 8291], 5.00th=[10552], 10.00th=[10945], 20.00th=[11600], 00:10:46.218 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13435], 60.00th=[13829], 00:10:46.218 | 70.00th=[14222], 80.00th=[15008], 90.00th=[17433], 95.00th=[20055], 00:10:46.218 | 99.00th=[23725], 99.50th=[23987], 99.90th=[24773], 99.95th=[27395], 00:10:46.218 | 99.99th=[36439] 00:10:46.218 write: IOPS=3849, BW=15.0MiB/s (15.8MB/s)(15.2MiB/1009msec); 0 zone resets 00:10:46.218 slat (nsec): min=1847, max=20133k, avg=147502.21, stdev=979599.74 00:10:46.218 clat (usec): min=695, max=73733, avg=20045.21, stdev=12743.09 00:10:46.218 lat (usec): min=704, max=73736, avg=20192.71, stdev=12841.55 00:10:46.218 clat percentiles (usec): 00:10:46.218 | 1.00th=[ 5276], 5.00th=[ 5932], 10.00th=[ 8848], 20.00th=[11338], 00:10:46.218 | 30.00th=[11994], 40.00th=[13304], 50.00th=[15270], 60.00th=[20841], 00:10:46.218 | 70.00th=[22938], 80.00th=[27395], 90.00th=[33817], 95.00th=[49021], 00:10:46.218 | 99.00th=[67634], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:10:46.218 | 99.99th=[73925] 00:10:46.218 bw ( KiB/s): min=13664, max=16384, per=22.41%, avg=15024.00, stdev=1923.33, samples=2 00:10:46.218 iops : min= 3416, max= 4096, avg=3756.00, stdev=480.83, 
samples=2 00:10:46.218 lat (usec) : 750=0.05% 00:10:46.218 lat (msec) : 4=0.20%, 10=6.87%, 20=68.85%, 50=21.48%, 100=2.54% 00:10:46.218 cpu : usr=2.48%, sys=5.75%, ctx=339, majf=0, minf=1 00:10:46.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:46.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.218 issued rwts: total=3584,3884,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.218 job3: (groupid=0, jobs=1): err= 0: pid=189143: Sat Dec 14 02:52:01 2024 00:10:46.218 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:10:46.218 slat (nsec): min=1024, max=19166k, avg=104017.82, stdev=867768.79 00:10:46.218 clat (usec): min=2469, max=49978, avg=13650.57, stdev=5636.83 00:10:46.218 lat (usec): min=2490, max=49984, avg=13754.58, stdev=5717.49 00:10:46.218 clat percentiles (usec): 00:10:46.218 | 1.00th=[ 3916], 5.00th=[ 7111], 10.00th=[ 9110], 20.00th=[10290], 00:10:46.218 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11994], 60.00th=[12780], 00:10:46.218 | 70.00th=[14484], 80.00th=[16909], 90.00th=[19792], 95.00th=[25035], 00:10:46.218 | 99.00th=[35390], 99.50th=[42730], 99.90th=[50070], 99.95th=[50070], 00:10:46.218 | 99.99th=[50070] 00:10:46.218 write: IOPS=4652, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1005msec); 0 zone resets 00:10:46.218 slat (nsec): min=1822, max=17062k, avg=81469.02, stdev=642445.46 00:10:46.218 clat (usec): min=392, max=49975, avg=13823.56, stdev=9188.21 00:10:46.218 lat (usec): min=402, max=49982, avg=13905.03, stdev=9263.62 00:10:46.218 clat percentiles (usec): 00:10:46.218 | 1.00th=[ 2024], 5.00th=[ 4080], 10.00th=[ 5014], 20.00th=[ 7242], 00:10:46.218 | 30.00th=[ 8848], 40.00th=[10290], 50.00th=[11207], 60.00th=[11994], 00:10:46.218 | 70.00th=[13173], 80.00th=[20841], 90.00th=[27132], 95.00th=[36439], 00:10:46.218 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[43779], 00:10:46.218 | 99.99th=[50070] 00:10:46.218 bw ( KiB/s): min=15984, max=21072, per=27.64%, avg=18528.00, stdev=3597.76, samples=2 00:10:46.218 iops : min= 3996, max= 5268, avg=4632.00, stdev=899.44, samples=2 00:10:46.218 lat (usec) : 500=0.02% 00:10:46.218 lat (msec) : 2=0.43%, 4=2.33%, 10=24.22%, 20=57.66%, 50=15.34% 00:10:46.218 cpu : usr=2.89%, sys=4.98%, ctx=358, majf=0, minf=2 00:10:46.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:46.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.218 issued rwts: total=4608,4676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.218 00:10:46.218 Run status group 0 (all jobs): 00:10:46.218 READ: bw=63.3MiB/s (66.4MB/s), 11.8MiB/s-19.8MiB/s (12.3MB/s-20.8MB/s), io=63.9MiB (67.0MB), run=1005-1009msec 00:10:46.219 WRITE: bw=65.5MiB/s (68.6MB/s), 11.9MiB/s-20.4MiB/s (12.5MB/s-21.4MB/s), io=66.1MiB (69.3MB), run=1005-1009msec 00:10:46.219 00:10:46.219 Disk stats (read/write): 00:10:46.219 nvme0n1: ios=4383/4608, merge=0/0, ticks=13469/11838, in_queue=25307, util=97.70% 00:10:46.219 nvme0n2: ios=2560/2695, merge=0/0, ticks=32466/44508, in_queue=76974, util=83.35% 00:10:46.219 nvme0n3: ios=2769/3072, merge=0/0, ticks=24751/36773, in_queue=61524, util=98.81% 00:10:46.219 nvme0n4: ios=3072/3583, merge=0/0, 
ticks=44503/54695, in_queue=99198, util=89.22% 00:10:46.219 02:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:46.219 02:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=189367 00:10:46.219 02:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:46.219 02:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:46.219 [global] 00:10:46.219 thread=1 00:10:46.219 invalidate=1 00:10:46.219 rw=read 00:10:46.219 time_based=1 00:10:46.219 runtime=10 00:10:46.219 ioengine=libaio 00:10:46.219 direct=1 00:10:46.219 bs=4096 00:10:46.219 iodepth=1 00:10:46.219 norandommap=1 00:10:46.219 numjobs=1 00:10:46.219 00:10:46.219 [job0] 00:10:46.219 filename=/dev/nvme0n1 00:10:46.219 [job1] 00:10:46.219 filename=/dev/nvme0n2 00:10:46.219 [job2] 00:10:46.219 filename=/dev/nvme0n3 00:10:46.219 [job3] 00:10:46.219 filename=/dev/nvme0n4 00:10:46.219 Could not set queue depth (nvme0n1) 00:10:46.219 Could not set queue depth (nvme0n2) 00:10:46.219 Could not set queue depth (nvme0n3) 00:10:46.219 Could not set queue depth (nvme0n4) 00:10:46.477 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.477 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.477 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.477 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.477 fio-3.35 00:10:46.477 Starting 4 threads 00:10:49.008 02:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:49.267 02:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:49.267 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=6008832, buflen=4096 00:10:49.267 fio: pid=189565, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:49.526 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=53899264, buflen=4096 00:10:49.526 fio: pid=189557, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:49.526 02:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.526 02:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:49.785 02:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.785 02:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:49.785 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=446464, buflen=4096 00:10:49.785 fio: pid=189521, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:50.045 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=10915840, buflen=4096 
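These err=95 / Operation not supported failures are the intended outcome of the hotplug phase rather than a regression: fio.sh@58 leaves the 10-second read job from fio-wrapper running in the background (fio_pid=189367) while fio.sh@63-66 delete the RAID and malloc bdevs backing the four namespaces, so reads still in flight fail and the script can confirm further down that fio exits non-zero ("nvmf hotplug test: fio failed as expected"). A condensed bash sketch of that pattern, assuming the default rpc.py socket and reusing only the bdev names and wrapper arguments visible in this log (the real fio.sh iterates over its $malloc_bdevs/$raid_malloc_bdevs/$concat_malloc_bdevs lists rather than hard-coding names):

  # sketch only: start long-running reads against the exported namespaces,
  # then hot-remove their backing bdevs and expect fio to fail
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  wrap=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper
  $wrap -p nvmf -i 4096 -d 1 -t read -r 10 &    # same invocation as fio.sh@58
  fio_pid=$!
  $rpc bdev_raid_delete concat0                 # fio.sh@63
  $rpc bdev_raid_delete raid0                   # fio.sh@64
  for b in Malloc{0..6}; do                     # fio.sh@65-66 loop over the malloc bdevs
    $rpc bdev_malloc_delete "$b"
  done
  if wait "$fio_pid"; then
    echo "unexpected: fio survived bdev removal"
  else
    echo "nvmf hotplug test: fio failed as expected"
  fi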
00:10:50.045 fio: pid=189536, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:50.045 02:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.045 02:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:50.045 00:10:50.045 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189521: Sat Dec 14 02:52:04 2024 00:10:50.045 read: IOPS=34, BW=138KiB/s (141kB/s)(436KiB/3168msec) 00:10:50.045 slat (nsec): min=7400, max=71049, avg=19142.16, stdev=9265.21 00:10:50.045 clat (usec): min=182, max=44541, avg=28846.97, stdev=18930.92 00:10:50.045 lat (usec): min=190, max=44562, avg=28865.64, stdev=18936.00 00:10:50.045 clat percentiles (usec): 00:10:50.045 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 206], 20.00th=[ 219], 00:10:50.045 | 30.00th=[ 603], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:50.045 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:50.045 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:10:50.045 | 99.99th=[44303] 00:10:50.045 bw ( KiB/s): min= 96, max= 336, per=0.67%, avg=139.17, stdev=96.55, samples=6 00:10:50.045 iops : min= 24, max= 84, avg=34.67, stdev=24.19, samples=6 00:10:50.045 lat (usec) : 250=23.64%, 500=4.55%, 750=1.82% 00:10:50.045 lat (msec) : 50=69.09% 00:10:50.045 cpu : usr=0.06%, sys=0.06%, ctx=112, majf=0, minf=1 00:10:50.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.045 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.045 issued rwts: total=110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.045 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189536: Sat Dec 14 02:52:04 2024 00:10:50.045 read: IOPS=795, BW=3182KiB/s (3258kB/s)(10.4MiB/3350msec) 00:10:50.045 slat (usec): min=3, max=13680, avg=27.28, stdev=409.86 00:10:50.045 clat (usec): min=155, max=42088, avg=1219.64, stdev=6306.23 00:10:50.045 lat (usec): min=162, max=42109, avg=1246.93, stdev=6318.48 00:10:50.045 clat percentiles (usec): 00:10:50.045 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:10:50.045 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 221], 00:10:50.045 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 269], 00:10:50.045 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:50.045 | 99.99th=[42206] 00:10:50.045 bw ( KiB/s): min= 104, max=11236, per=9.62%, avg=1998.00, stdev=4525.78, samples=6 00:10:50.045 iops : min= 26, max= 2809, avg=499.50, stdev=1131.44, samples=6 00:10:50.045 lat (usec) : 250=82.97%, 500=14.44%, 750=0.04% 00:10:50.045 lat (msec) : 10=0.08%, 50=2.44% 00:10:50.045 cpu : usr=0.27%, sys=0.81%, ctx=2675, majf=0, minf=2 00:10:50.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.045 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.045 issued rwts: total=2666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.045 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:10:50.045 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189557: Sat Dec 14 02:52:04 2024 00:10:50.045 read: IOPS=4474, BW=17.5MiB/s (18.3MB/s)(51.4MiB/2941msec) 00:10:50.045 slat (usec): min=6, max=15077, avg=10.05, stdev=166.35 00:10:50.045 clat (usec): min=160, max=8729, avg=209.96, stdev=82.74 00:10:50.045 lat (usec): min=167, max=15371, avg=220.01, stdev=187.01 00:10:50.045 clat percentiles (usec): 00:10:50.045 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:10:50.045 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:10:50.045 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 249], 95.00th=[ 260], 00:10:50.045 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 400], 99.95th=[ 453], 00:10:50.045 | 99.99th=[ 2638] 00:10:50.045 bw ( KiB/s): min=17904, max=18952, per=88.08%, avg=18300.80, stdev=511.61, samples=5 00:10:50.045 iops : min= 4476, max= 4738, avg=4575.20, stdev=127.90, samples=5 00:10:50.045 lat (usec) : 250=90.77%, 500=9.19%, 750=0.02% 00:10:50.045 lat (msec) : 4=0.02%, 10=0.01% 00:10:50.045 cpu : usr=2.48%, sys=6.87%, ctx=13162, majf=0, minf=2 00:10:50.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.045 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.045 issued rwts: total=13160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.045 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189565: Sat Dec 14 02:52:04 2024 00:10:50.045 read: IOPS=533, BW=2131KiB/s (2182kB/s)(5868KiB/2754msec) 00:10:50.045 slat (nsec): min=8029, max=34283, avg=9574.31, stdev=3238.68 00:10:50.045 clat (usec): min=191, max=42081, avg=1847.28, stdev=7997.01 00:10:50.045 lat (usec): min=200, max=42106, avg=1856.85, stdev=7999.65 00:10:50.045 clat percentiles (usec): 00:10:50.045 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 212], 00:10:50.045 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:10:50.045 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 265], 00:10:50.045 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:50.045 | 99.99th=[42206] 00:10:50.045 bw ( KiB/s): min= 96, max= 8816, per=11.25%, avg=2337.60, stdev=3777.07, samples=5 00:10:50.045 iops : min= 24, max= 2204, avg=584.40, stdev=944.27, samples=5 00:10:50.045 lat (usec) : 250=91.69%, 500=4.22%, 750=0.07% 00:10:50.045 lat (msec) : 50=3.95% 00:10:50.045 cpu : usr=0.22%, sys=0.58%, ctx=1471, majf=0, minf=2 00:10:50.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.045 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.045 issued rwts: total=1468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.045 00:10:50.045 Run status group 0 (all jobs): 00:10:50.045 READ: bw=20.3MiB/s (21.3MB/s), 138KiB/s-17.5MiB/s (141kB/s-18.3MB/s), io=68.0MiB (71.3MB), run=2754-3350msec 00:10:50.045 00:10:50.045 Disk stats (read/write): 00:10:50.045 nvme0n1: ios=107/0, merge=0/0, ticks=3063/0, in_queue=3063, util=95.56% 00:10:50.045 nvme0n2: ios=1797/0, merge=0/0, ticks=3715/0, in_queue=3715, 
util=98.92% 00:10:50.045 nvme0n3: ios=12951/0, merge=0/0, ticks=2585/0, in_queue=2585, util=96.28% 00:10:50.045 nvme0n4: ios=1506/0, merge=0/0, ticks=3623/0, in_queue=3623, util=98.85% 00:10:50.046 02:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.046 02:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:50.304 02:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.304 02:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:50.563 02:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.563 02:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:50.822 02:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.822 02:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:51.082 02:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:51.082 02:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 189367 00:10:51.082 02:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:51.082 02:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.082 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.082 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:51.082 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:51.082 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.082 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:51.082 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.082 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:51.082 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:51.082 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:51.082 nvmf hotplug test: fio failed as expected 00:10:51.082 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.342 rmmod nvme_tcp 00:10:51.342 rmmod nvme_fabrics 00:10:51.342 rmmod nvme_keyring 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 186655 ']' 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 186655 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 186655 ']' 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 186655 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 186655 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 186655' 00:10:51.342 killing process with pid 186655 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 186655 00:10:51.342 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 186655 00:10:51.602 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.602 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.602 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.602 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:51.602 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
00:10:51.602 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.602 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.602 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:51.602 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:51.602 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.602 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.602 02:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:54.142 00:10:54.142 real 0m26.821s 00:10:54.142 user 1m47.938s 00:10:54.142 sys 0m8.440s 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.142 ************************************ 00:10:54.142 END TEST nvmf_fio_target 00:10:54.142 ************************************ 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.142 ************************************ 00:10:54.142 START TEST nvmf_bdevio 00:10:54.142 ************************************ 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:54.142 * Looking for test storage... 
00:10:54.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:54.142 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:54.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.143 --rc genhtml_branch_coverage=1 00:10:54.143 --rc genhtml_function_coverage=1 00:10:54.143 --rc genhtml_legend=1 00:10:54.143 --rc geninfo_all_blocks=1 00:10:54.143 --rc geninfo_unexecuted_blocks=1 00:10:54.143 00:10:54.143 ' 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:54.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.143 --rc genhtml_branch_coverage=1 00:10:54.143 --rc genhtml_function_coverage=1 00:10:54.143 --rc genhtml_legend=1 00:10:54.143 --rc geninfo_all_blocks=1 00:10:54.143 --rc geninfo_unexecuted_blocks=1 00:10:54.143 00:10:54.143 ' 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:54.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.143 --rc genhtml_branch_coverage=1 00:10:54.143 --rc genhtml_function_coverage=1 00:10:54.143 --rc genhtml_legend=1 00:10:54.143 --rc geninfo_all_blocks=1 00:10:54.143 --rc geninfo_unexecuted_blocks=1 00:10:54.143 00:10:54.143 ' 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:54.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.143 --rc genhtml_branch_coverage=1 00:10:54.143 --rc genhtml_function_coverage=1 00:10:54.143 --rc genhtml_legend=1 00:10:54.143 --rc geninfo_all_blocks=1 00:10:54.143 --rc geninfo_unexecuted_blocks=1 00:10:54.143 00:10:54.143 ' 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
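The "[: : integer expression expected" message above is not log noise: nvmf/common.sh line 33 ends up running '[' '' -eq 1 ']' because the flag it tests is empty in this run, and the [ builtin refuses to compare an empty string numerically. It is non-fatal here (the trace continues), but the failure mode is easy to reproduce and to guard against; the variable name below is illustrative, not the SPDK one:

    unset MAYBE_FLAG
    if [ "$MAYBE_FLAG" -eq 1 ]; then      # prints "[: : integer expression expected",
        echo enabled                      # then this branch is simply skipped
    fi

    # Defensive spellings with the same meaning and no warning:
    if [ "${MAYBE_FLAG:-0}" -eq 1 ]; then echo enabled; fi    # default empty/unset to 0
    if (( ${MAYBE_FLAG:-0} == 1 )); then echo enabled; fi     # arithmetic context, same default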
-- # nvmftestinit 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:54.143 02:52:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:00.718 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.718 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:00.719 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:00.719 02:52:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:00.719 Found net devices under 0000:af:00.0: cvl_0_0 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:00.719 Found net devices under 0000:af:00.1: cvl_0_1 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.719 
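The block above is the NIC discovery step: nvmf/common.sh keeps allow-lists of Intel E810/X722 and Mellanox PCI device IDs, matches them against the host (two 0x8086:0x159b E810 ports here), and resolves each PCI address to its kernel interface through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A hedged sketch of that sysfs lookup; the pci_bus_cache bookkeeping itself is SPDK-internal and not reproduced:

    pci=0000:af:00.0                                    # example address taken from this run
    vendor=$(cat /sys/bus/pci/devices/"$pci"/vendor)    # 0x8086 for the E810 above
    device=$(cat /sys/bus/pci/devices/"$pci"/device)    # 0x159b

    # Every interface the kernel bound to this PCI function shows up under net/.
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue                    # no net/ children: no driver or not a NIC
        dev=${netdir##*/}                               # e.g. cvl_0_0
        state=$(cat "$netdir/operstate" 2>/dev/null)
        echo "Found net device under $pci: $dev ($state)"
    done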
02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:00.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:11:00.719 00:11:00.719 --- 10.0.0.2 ping statistics --- 00:11:00.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.719 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:11:00.719 00:11:00.719 --- 10.0.0.1 ping statistics --- 00:11:00.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.719 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=193901 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 193901 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 193901 ']' 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.719 02:52:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.719 [2024-12-14 02:52:14.988975] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
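Between the NIC discovery and the target start, nvmf_tcp_init builds a two-endpoint topology on a single host: the target-side port (cvl_0_0, 10.0.0.2) is moved into a network namespace, the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace, port 4420 is opened with a tagged iptables rule, and both directions are ping-checked before nvmf_tgt is launched inside the namespace. A hedged recreation of those steps (names, addresses and the core mask are the ones in the trace; run as root, and treat this as a sketch rather than the library function):

    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                    # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                # initiator keeps 10.0.0.1 outside it
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port and tag the rule so teardown can strip it later.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: test rule'

    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1               # namespace -> root ns

    # The target itself then runs inside the namespace, as in the trace:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &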
00:11:00.719 [2024-12-14 02:52:14.989022] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.719 [2024-12-14 02:52:15.067378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.719 [2024-12-14 02:52:15.089546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.719 [2024-12-14 02:52:15.089581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.719 [2024-12-14 02:52:15.089588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.719 [2024-12-14 02:52:15.089594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.719 [2024-12-14 02:52:15.089603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.719 [2024-12-14 02:52:15.090902] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:11:00.719 [2024-12-14 02:52:15.091010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:11:00.719 [2024-12-14 02:52:15.091115] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.720 [2024-12-14 02:52:15.091117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.720 [2024-12-14 02:52:15.221689] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.720 Malloc0 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.720 02:52:15 
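nvmfappstart records the new target's pid and then blocks in waitforlisten; the core mask 0x78 passed above is binary 1111000, which is why the DPDK reactors report cores 3, 4, 5 and 6. The wait itself amounts to polling the RPC socket until the application answers; a rough, hedged equivalent (rpc_get_methods is a standard SPDK RPC, and the retry budget here is arbitrary):

    # Block until the freshly started target serves RPCs on /var/tmp/spdk.sock.
    for _ in $(seq 1 100); do
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break                        # target is up; safe to start configuring it
        fi
        sleep 0.1
    done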
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.720 [2024-12-14 02:52:15.290376] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:00.720 { 00:11:00.720 "params": { 00:11:00.720 "name": "Nvme$subsystem", 00:11:00.720 "trtype": "$TEST_TRANSPORT", 00:11:00.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:00.720 "adrfam": "ipv4", 00:11:00.720 "trsvcid": "$NVMF_PORT", 00:11:00.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:00.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:00.720 "hdgst": ${hdgst:-false}, 00:11:00.720 "ddgst": ${ddgst:-false} 00:11:00.720 }, 00:11:00.720 "method": "bdev_nvme_attach_controller" 00:11:00.720 } 00:11:00.720 EOF 00:11:00.720 )") 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:00.720 02:52:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:00.720 "params": { 00:11:00.720 "name": "Nvme1", 00:11:00.720 "trtype": "tcp", 00:11:00.720 "traddr": "10.0.0.2", 00:11:00.720 "adrfam": "ipv4", 00:11:00.720 "trsvcid": "4420", 00:11:00.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:00.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:00.720 "hdgst": false, 00:11:00.720 "ddgst": false 00:11:00.720 }, 00:11:00.720 "method": "bdev_nvme_attach_controller" 00:11:00.720 }' 00:11:00.720 [2024-12-14 02:52:15.342244] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
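With the target listening, the bdevio stack is assembled by the four rpc_cmd calls above (transport, a 64 MiB/512 B Malloc0 bdev, subsystem cnode1 with that namespace, and a TCP listener on 10.0.0.2:4420), and gen_nvmf_target_json hands bdevio a JSON config over /dev/fd/62 describing the controller to attach from the initiator side. A hedged stand-alone equivalent: rpc_cmd in the harness is effectively scripts/rpc.py against /var/tmp/spdk.sock, and the wrapper object below follows SPDK's usual JSON-config layout around the params block that is printed verbatim in the trace:

    # Target side (flags copied from the trace).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: describe the NVMe-oF controller for bdevio to attach and exercise.
    cat > /tmp/bdevio.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./test/bdev/bdevio/bdevio --json /tmp/bdevio.json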
00:11:00.720 [2024-12-14 02:52:15.342290] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193929 ] 00:11:00.720 [2024-12-14 02:52:15.416617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:00.720 [2024-12-14 02:52:15.441511] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.720 [2024-12-14 02:52:15.441621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.720 [2024-12-14 02:52:15.441622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.720 I/O targets: 00:11:00.720 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:00.720 00:11:00.720 00:11:00.720 CUnit - A unit testing framework for C - Version 2.1-3 00:11:00.720 http://cunit.sourceforge.net/ 00:11:00.720 00:11:00.720 00:11:00.720 Suite: bdevio tests on: Nvme1n1 00:11:00.720 Test: blockdev write read block ...passed 00:11:00.720 Test: blockdev write zeroes read block ...passed 00:11:00.720 Test: blockdev write zeroes read no split ...passed 00:11:00.720 Test: blockdev write zeroes read split ...passed 00:11:00.720 Test: blockdev write zeroes read split partial ...passed 00:11:00.720 Test: blockdev reset ...[2024-12-14 02:52:15.798153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:00.720 [2024-12-14 02:52:15.798215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f36340 (9): Bad file descriptor 00:11:00.979 [2024-12-14 02:52:15.948457] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:00.979 passed 00:11:00.979 Test: blockdev write read 8 blocks ...passed 00:11:00.979 Test: blockdev write read size > 128k ...passed 00:11:00.979 Test: blockdev write read invalid size ...passed 00:11:00.979 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:00.979 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:00.979 Test: blockdev write read max offset ...passed 00:11:01.238 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:01.238 Test: blockdev writev readv 8 blocks ...passed 00:11:01.238 Test: blockdev writev readv 30 x 1block ...passed 00:11:01.238 Test: blockdev writev readv block ...passed 00:11:01.238 Test: blockdev writev readv size > 128k ...passed 00:11:01.238 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:01.238 Test: blockdev comparev and writev ...[2024-12-14 02:52:16.202969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.238 [2024-12-14 02:52:16.203006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:01.238 [2024-12-14 02:52:16.203021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.238 [2024-12-14 02:52:16.203029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:01.238 [2024-12-14 02:52:16.203275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.238 [2024-12-14 02:52:16.203285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:01.238 [2024-12-14 02:52:16.203296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.238 [2024-12-14 02:52:16.203304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:01.238 [2024-12-14 02:52:16.203533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.238 [2024-12-14 02:52:16.203542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:01.238 [2024-12-14 02:52:16.203553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.238 [2024-12-14 02:52:16.203560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:01.238 [2024-12-14 02:52:16.203805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.238 [2024-12-14 02:52:16.203815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:01.239 [2024-12-14 02:52:16.203825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.239 [2024-12-14 02:52:16.203833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:01.239 passed 00:11:01.239 Test: blockdev nvme passthru rw ...passed 00:11:01.239 Test: blockdev nvme passthru vendor specific ...[2024-12-14 02:52:16.285686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.239 [2024-12-14 02:52:16.285702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:01.239 [2024-12-14 02:52:16.285806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.239 [2024-12-14 02:52:16.285816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:01.239 [2024-12-14 02:52:16.285919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.239 [2024-12-14 02:52:16.285928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:01.239 [2024-12-14 02:52:16.286027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.239 [2024-12-14 02:52:16.286036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:01.239 passed 00:11:01.239 Test: blockdev nvme admin passthru ...passed 00:11:01.239 Test: blockdev copy ...passed 00:11:01.239 00:11:01.239 Run Summary: Type Total Ran Passed Failed Inactive 00:11:01.239 suites 1 1 n/a 0 0 00:11:01.239 tests 23 23 23 0 0 00:11:01.239 asserts 152 152 152 0 n/a 00:11:01.239 00:11:01.239 Elapsed time = 1.409 seconds 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:01.498 rmmod nvme_tcp 00:11:01.498 rmmod nvme_fabrics 00:11:01.498 rmmod nvme_keyring 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
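The rmmod/modprobe lines just above, together with the killprocess and iptables entries that follow, are nvmftestfini unwinding everything the test set up: the kernel initiator modules are unloaded with errexit relaxed (so an absent module is not an error), the target is killed by pid, the tagged firewall rule is stripped by filtering iptables-save, and the namespace and leftover test addresses are removed. Gathered into one hedged sketch (the netns delete is what _remove_spdk_ns amounts to here, an assumption about its internals):

    set +e                              # tolerate modules that were never loaded
    modprobe -v -r nvme-tcp             # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines are its output
    modprobe -v -r nvme-fabrics
    set -e

    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null       # stop the target before touching the network

    iptables-save | grep -v SPDK_NVMF | iptables-restore # drop only the rules tagged earlier

    ip netns delete cvl_0_0_ns_spdk 2>/dev/null          # assumption: the effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                             # clear the initiator-side test address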
00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 193901 ']' 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 193901 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 193901 ']' 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 193901 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 193901 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 193901' 00:11:01.498 killing process with pid 193901 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 193901 00:11:01.498 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 193901 00:11:01.758 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.758 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:01.758 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:01.758 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:01.758 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:01.758 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:01.758 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:01.758 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.758 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:01.758 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.758 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.758 02:52:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.319 02:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:04.319 00:11:04.319 real 0m10.063s 00:11:04.319 user 0m10.687s 00:11:04.319 sys 0m4.943s 00:11:04.319 02:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.319 02:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.319 ************************************ 00:11:04.319 END TEST nvmf_bdevio 00:11:04.319 ************************************ 00:11:04.319 02:52:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:04.319 00:11:04.319 real 4m33.871s 00:11:04.319 user 10m17.724s 00:11:04.319 sys 1m35.046s 00:11:04.319 
02:52:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.319 02:52:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:04.319 ************************************ 00:11:04.319 END TEST nvmf_target_core 00:11:04.319 ************************************ 00:11:04.319 02:52:18 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:04.319 02:52:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.319 02:52:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.319 02:52:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:04.319 ************************************ 00:11:04.319 START TEST nvmf_target_extra 00:11:04.319 ************************************ 00:11:04.319 02:52:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:04.319 * Looking for test storage... 00:11:04.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:04.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.319 --rc genhtml_branch_coverage=1 00:11:04.319 --rc genhtml_function_coverage=1 00:11:04.319 --rc genhtml_legend=1 00:11:04.319 --rc geninfo_all_blocks=1 00:11:04.319 --rc geninfo_unexecuted_blocks=1 00:11:04.319 00:11:04.319 ' 00:11:04.319 02:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:04.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.320 --rc genhtml_branch_coverage=1 00:11:04.320 --rc genhtml_function_coverage=1 00:11:04.320 --rc genhtml_legend=1 00:11:04.320 --rc geninfo_all_blocks=1 00:11:04.320 --rc geninfo_unexecuted_blocks=1 00:11:04.320 00:11:04.320 ' 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:04.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.320 --rc genhtml_branch_coverage=1 00:11:04.320 --rc genhtml_function_coverage=1 00:11:04.320 --rc genhtml_legend=1 00:11:04.320 --rc geninfo_all_blocks=1 00:11:04.320 --rc geninfo_unexecuted_blocks=1 00:11:04.320 00:11:04.320 ' 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:04.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.320 --rc genhtml_branch_coverage=1 00:11:04.320 --rc genhtml_function_coverage=1 00:11:04.320 --rc genhtml_legend=1 00:11:04.320 --rc geninfo_all_blocks=1 00:11:04.320 --rc geninfo_unexecuted_blocks=1 00:11:04.320 00:11:04.320 ' 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:04.320 ************************************ 00:11:04.320 START TEST nvmf_example 00:11:04.320 ************************************ 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:04.320 * Looking for test storage... 
00:11:04.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:04.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.320 --rc genhtml_branch_coverage=1 00:11:04.320 --rc genhtml_function_coverage=1 00:11:04.320 --rc genhtml_legend=1 00:11:04.320 --rc geninfo_all_blocks=1 00:11:04.320 --rc geninfo_unexecuted_blocks=1 00:11:04.320 00:11:04.320 ' 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:04.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.320 --rc genhtml_branch_coverage=1 00:11:04.320 --rc genhtml_function_coverage=1 00:11:04.320 --rc genhtml_legend=1 00:11:04.320 --rc geninfo_all_blocks=1 00:11:04.320 --rc geninfo_unexecuted_blocks=1 00:11:04.320 00:11:04.320 ' 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:04.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.320 --rc genhtml_branch_coverage=1 00:11:04.320 --rc genhtml_function_coverage=1 00:11:04.320 --rc genhtml_legend=1 00:11:04.320 --rc geninfo_all_blocks=1 00:11:04.320 --rc geninfo_unexecuted_blocks=1 00:11:04.320 00:11:04.320 ' 00:11:04.320 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:04.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.320 --rc genhtml_branch_coverage=1 00:11:04.320 --rc genhtml_function_coverage=1 00:11:04.320 --rc genhtml_legend=1 00:11:04.320 --rc geninfo_all_blocks=1 00:11:04.320 --rc geninfo_unexecuted_blocks=1 00:11:04.320 00:11:04.320 ' 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:04.321 02:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:04.321 02:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:04.321 02:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:10.892 02:52:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:10.892 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.892 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:10.893 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:10.893 Found net devices under 0000:af:00.0: cvl_0_0 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:10.893 Found net devices under 0000:af:00.1: cvl_0_1 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.893 02:52:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:10.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:10.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:11:10.893 00:11:10.893 --- 10.0.0.2 ping statistics --- 00:11:10.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.893 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:10.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:11:10.893 00:11:10.893 --- 10.0.0.1 ping statistics --- 00:11:10.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.893 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=197888 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 197888 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 197888 ']' 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.893 02:52:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.461 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.462 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:11.462 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:11.462 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.462 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.462 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.462 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.462 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.462 02:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.462 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.462 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:11.462 02:52:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:23.670 Initializing NVMe Controllers 00:11:23.670 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:23.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:23.670 Initialization complete. Launching workers. 00:11:23.670 ======================================================== 00:11:23.670 Latency(us) 00:11:23.670 Device Information : IOPS MiB/s Average min max 00:11:23.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18454.44 72.09 3469.37 682.72 15631.65 00:11:23.670 ======================================================== 00:11:23.670 Total : 18454.44 72.09 3469.37 682.72 15631.65 00:11:23.670 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.670 rmmod nvme_tcp 00:11:23.670 rmmod nvme_fabrics 00:11:23.670 rmmod nvme_keyring 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 197888 ']' 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 197888 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 197888 ']' 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 197888 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 197888 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
process_name=nvmf 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 197888' 00:11:23.670 killing process with pid 197888 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 197888 00:11:23.670 02:52:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 197888 00:11:23.670 nvmf threads initialize successfully 00:11:23.670 bdev subsystem init successfully 00:11:23.670 created a nvmf target service 00:11:23.670 create targets's poll groups done 00:11:23.670 all subsystems of target started 00:11:23.670 nvmf target is running 00:11:23.670 all subsystems of target stopped 00:11:23.670 destroy targets's poll groups done 00:11:23.670 destroyed the nvmf target service 00:11:23.670 bdev subsystem finish successfully 00:11:23.670 nvmf threads destroy successfully 00:11:23.670 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:23.670 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:23.670 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:23.670 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:23.670 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:23.670 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:23.670 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:23.670 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:23.670 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:23.670 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.670 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.670 02:52:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.237 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:24.237 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:24.237 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.237 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.237 00:11:24.237 real 0m19.974s 00:11:24.237 user 0m46.582s 00:11:24.237 sys 0m6.006s 00:11:24.237 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.237 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.237 ************************************ 00:11:24.237 END TEST nvmf_example 00:11:24.237 ************************************ 00:11:24.237 02:52:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:24.237 02:52:39 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.237 02:52:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.237 02:52:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:24.237 ************************************ 00:11:24.237 START TEST nvmf_filesystem 00:11:24.237 ************************************ 00:11:24.237 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:24.237 * Looking for test storage... 00:11:24.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.238 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:24.238 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:24.238 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:24.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.500 --rc genhtml_branch_coverage=1 00:11:24.500 --rc genhtml_function_coverage=1 00:11:24.500 --rc genhtml_legend=1 00:11:24.500 --rc geninfo_all_blocks=1 00:11:24.500 --rc geninfo_unexecuted_blocks=1 00:11:24.500 00:11:24.500 ' 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:24.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.500 --rc genhtml_branch_coverage=1 00:11:24.500 --rc genhtml_function_coverage=1 00:11:24.500 --rc genhtml_legend=1 00:11:24.500 --rc geninfo_all_blocks=1 00:11:24.500 --rc geninfo_unexecuted_blocks=1 00:11:24.500 00:11:24.500 ' 00:11:24.500 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:24.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.500 --rc genhtml_branch_coverage=1 00:11:24.500 --rc genhtml_function_coverage=1 00:11:24.500 --rc genhtml_legend=1 00:11:24.500 --rc geninfo_all_blocks=1 00:11:24.501 --rc geninfo_unexecuted_blocks=1 00:11:24.501 00:11:24.501 ' 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:24.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.501 --rc genhtml_branch_coverage=1 00:11:24.501 --rc genhtml_function_coverage=1 00:11:24.501 --rc genhtml_legend=1 00:11:24.501 --rc geninfo_all_blocks=1 00:11:24.501 --rc geninfo_unexecuted_blocks=1 00:11:24.501 00:11:24.501 ' 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:24.501 02:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:24.501 
02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:24.501 02:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:24.501 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
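The applications.sh records just after this point (@22-24) gate debug-application handling on the generated SPDK configuration: the helper checks that include/spdk/config.h exists and globs its contents for the SPDK_CONFIG_DEBUG define before consulting SPDK_AUTOTEST_DEBUG_APPS. A minimal sketch of that check, reconstructed from the trace (the $(<...) read and the apply_debug_app_overrides helper are illustrative assumptions, not the literal implementation):

  # common/applications.sh@22-24 -- only a debug build with the flag set changes app handling
  config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
  if [[ -e "$config_h" && "$(<"$config_h")" == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      (( SPDK_AUTOTEST_DEBUG_APPS )) && apply_debug_app_overrides   # hypothetical helper; not exercised in this run
  fi

In this run the glob does match (the dumped config.h defines SPDK_CONFIG_DEBUG 1), and the trace then simply evaluates (( SPDK_AUTOTEST_DEBUG_APPS )) at @24 before moving on to source scripts/common.sh.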
00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:24.502 #define SPDK_CONFIG_H 00:11:24.502 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:24.502 #define SPDK_CONFIG_APPS 1 00:11:24.502 #define SPDK_CONFIG_ARCH native 00:11:24.502 #undef SPDK_CONFIG_ASAN 00:11:24.502 #undef SPDK_CONFIG_AVAHI 00:11:24.502 #undef SPDK_CONFIG_CET 00:11:24.502 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:24.502 #define SPDK_CONFIG_COVERAGE 1 00:11:24.502 #define SPDK_CONFIG_CROSS_PREFIX 00:11:24.502 #undef SPDK_CONFIG_CRYPTO 00:11:24.502 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:24.502 #undef SPDK_CONFIG_CUSTOMOCF 00:11:24.502 #undef SPDK_CONFIG_DAOS 00:11:24.502 #define SPDK_CONFIG_DAOS_DIR 00:11:24.502 #define SPDK_CONFIG_DEBUG 1 00:11:24.502 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:24.502 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:24.502 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:24.502 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:24.502 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:24.502 #undef SPDK_CONFIG_DPDK_UADK 00:11:24.502 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:24.502 #define SPDK_CONFIG_EXAMPLES 1 00:11:24.502 #undef SPDK_CONFIG_FC 00:11:24.502 #define SPDK_CONFIG_FC_PATH 00:11:24.502 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:24.502 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:24.502 #define SPDK_CONFIG_FSDEV 1 00:11:24.502 #undef SPDK_CONFIG_FUSE 00:11:24.502 #undef SPDK_CONFIG_FUZZER 00:11:24.502 #define SPDK_CONFIG_FUZZER_LIB 00:11:24.502 #undef SPDK_CONFIG_GOLANG 00:11:24.502 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:24.502 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:24.502 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:24.502 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:24.502 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:24.502 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:24.502 #undef SPDK_CONFIG_HAVE_LZ4 00:11:24.502 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:24.502 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:24.502 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:24.502 #define SPDK_CONFIG_IDXD 1 00:11:24.502 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:24.502 #undef SPDK_CONFIG_IPSEC_MB 00:11:24.502 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:24.502 #define SPDK_CONFIG_ISAL 1 00:11:24.502 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:24.502 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:24.502 #define SPDK_CONFIG_LIBDIR 00:11:24.502 #undef SPDK_CONFIG_LTO 00:11:24.502 #define SPDK_CONFIG_MAX_LCORES 128 00:11:24.502 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:24.502 #define SPDK_CONFIG_NVME_CUSE 1 00:11:24.502 #undef SPDK_CONFIG_OCF 00:11:24.502 #define SPDK_CONFIG_OCF_PATH 00:11:24.502 #define SPDK_CONFIG_OPENSSL_PATH 00:11:24.502 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:24.502 #define SPDK_CONFIG_PGO_DIR 00:11:24.502 #undef SPDK_CONFIG_PGO_USE 00:11:24.502 #define SPDK_CONFIG_PREFIX /usr/local 00:11:24.502 #undef SPDK_CONFIG_RAID5F 00:11:24.502 #undef SPDK_CONFIG_RBD 00:11:24.502 #define SPDK_CONFIG_RDMA 1 00:11:24.502 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:24.502 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:24.502 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:24.502 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:24.502 #define SPDK_CONFIG_SHARED 1 00:11:24.502 #undef SPDK_CONFIG_SMA 00:11:24.502 #define SPDK_CONFIG_TESTS 1 00:11:24.502 #undef SPDK_CONFIG_TSAN 00:11:24.502 #define SPDK_CONFIG_UBLK 1 00:11:24.502 #define SPDK_CONFIG_UBSAN 1 00:11:24.502 #undef SPDK_CONFIG_UNIT_TESTS 00:11:24.502 #undef SPDK_CONFIG_URING 00:11:24.502 #define SPDK_CONFIG_URING_PATH 00:11:24.502 #undef SPDK_CONFIG_URING_ZNS 00:11:24.502 #undef SPDK_CONFIG_USDT 00:11:24.502 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:24.502 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:24.502 #define SPDK_CONFIG_VFIO_USER 1 00:11:24.502 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:24.502 #define SPDK_CONFIG_VHOST 1 00:11:24.502 #define SPDK_CONFIG_VIRTIO 1 00:11:24.502 #undef SPDK_CONFIG_VTUNE 00:11:24.502 #define SPDK_CONFIG_VTUNE_DIR 00:11:24.502 #define SPDK_CONFIG_WERROR 1 00:11:24.502 #define SPDK_CONFIG_WPDK_DIR 00:11:24.502 #undef SPDK_CONFIG_XNVME 00:11:24.502 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:24.502 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:24.503 02:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
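The PATH value in the paths/export.sh trace a few entries back keeps growing because the same /opt/go, /opt/protoc, and /opt/golangci directories are prepended every time the file is sourced. Below is a minimal sketch of a guarded prepend that keeps PATH idempotent; it is a general shell technique offered for comparison, not a patch to the harness's export.sh.

    # Prepend a directory to PATH only if it is not already present.
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;                      # already on PATH, do nothing
            *) PATH="$1${PATH:+:$PATH}" ;;    # otherwise put it at the front
        esac
    }

    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/golangci/1.54.2/bin
    export PATH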
00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:24.503 02:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
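The long run of "-- # : 0" / "-- # : 1" entries followed by "-- # export SPDK_TEST_..." is consistent with the usual default-then-export idiom: each flag gets a default via the ":" no-op plus ${VAR:=default}, then is exported so child scripts inherit it. A compact sketch of that idiom, with values chosen for illustration rather than copied from this run:

    # Give each test flag a default only if the caller did not set it, then export.
    : "${SPDK_RUN_FUNCTIONAL_TEST:=1}"
    : "${SPDK_TEST_NVMF:=0}"
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_RUN_FUNCTIONAL_TEST SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT

    # Downstream scripts branch on the exported flags instead of re-parsing argv.
    if [[ $SPDK_TEST_NVMF -eq 1 && $SPDK_TEST_NVMF_TRANSPORT == tcp ]]; then
        echo "would run the NVMe-oF/TCP suite here"
    fi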
00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:24.503 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:24.504 02:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # 
'[' -z /var/spdk/dependencies ']' 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:24.504 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:24.505 02:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 200231 ]] 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 200231 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.SspVyp 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:24.505 02:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.SspVyp/tests/target /tmp/spdk.SspVyp 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88598745088 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552389120 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6953644032 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
avails["$mount"]=47766163456 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776194560 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087466496 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110477824 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23011328 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47776014336 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776194560 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=180224 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:24.505 * Looking for test storage... 
00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:24.505 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88598745088 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9168236544 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:24.506 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:24.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.766 --rc genhtml_branch_coverage=1 00:11:24.766 --rc genhtml_function_coverage=1 00:11:24.766 --rc genhtml_legend=1 00:11:24.766 --rc geninfo_all_blocks=1 00:11:24.766 --rc geninfo_unexecuted_blocks=1 00:11:24.766 00:11:24.766 ' 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:24.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.766 --rc genhtml_branch_coverage=1 00:11:24.766 --rc genhtml_function_coverage=1 00:11:24.766 --rc genhtml_legend=1 00:11:24.766 --rc geninfo_all_blocks=1 00:11:24.766 --rc geninfo_unexecuted_blocks=1 00:11:24.766 00:11:24.766 ' 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:24.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.766 --rc genhtml_branch_coverage=1 00:11:24.766 --rc genhtml_function_coverage=1 00:11:24.766 --rc genhtml_legend=1 00:11:24.766 --rc geninfo_all_blocks=1 00:11:24.766 --rc geninfo_unexecuted_blocks=1 00:11:24.766 00:11:24.766 ' 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:24.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.766 --rc genhtml_branch_coverage=1 00:11:24.766 --rc genhtml_function_coverage=1 00:11:24.766 --rc genhtml_legend=1 00:11:24.766 --rc geninfo_all_blocks=1 00:11:24.766 --rc geninfo_unexecuted_blocks=1 00:11:24.766 00:11:24.766 ' 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.766 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.767 02:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:24.767 02:52:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:31.352 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:31.352 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.352 02:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:31.352 Found net devices under 0000:af:00.0: cvl_0_0 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:31.352 Found net devices under 0000:af:00.1: cvl_0_1 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:31.352 02:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:31.352 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:31.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:11:31.353 00:11:31.353 --- 10.0.0.2 ping statistics --- 00:11:31.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.353 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:11:31.353 00:11:31.353 --- 10.0.0.1 ping statistics --- 00:11:31.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.353 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.353 ************************************ 00:11:31.353 START TEST nvmf_filesystem_no_in_capsule 00:11:31.353 ************************************ 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=203227 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 203227 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 203227 ']' 00:11:31.353 02:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.353 [2024-12-14 02:52:45.725902] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:31.353 [2024-12-14 02:52:45.725953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.353 [2024-12-14 02:52:45.804841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.353 [2024-12-14 02:52:45.828191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.353 [2024-12-14 02:52:45.828226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.353 [2024-12-14 02:52:45.828232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.353 [2024-12-14 02:52:45.828239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.353 [2024-12-14 02:52:45.828243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
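For readers skimming the trace, the nvmftestinit/nvmf_tcp_init sequence logged above reduces to the namespace topology sketched below. This is a condensed reconstruction from the xtrace output, not a separate script; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses, and the nvmf_tgt path are the ones specific to this test node (two ice-driven E810 ports at 0000:af:00.0/1).

# target NIC port goes into its own namespace; the initiator port stays in the default namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target-side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &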
00:11:31.353 [2024-12-14 02:52:45.829549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.353 [2024-12-14 02:52:45.829659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.353 [2024-12-14 02:52:45.829765] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.353 [2024-12-14 02:52:45.829766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.353 [2024-12-14 02:52:45.962437] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.353 02:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.353 Malloc1 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.353 02:52:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.353 [2024-12-14 02:52:46.125900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:31.353 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:31.354 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.354 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.354 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.354 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:31.354 { 00:11:31.354 "name": "Malloc1", 00:11:31.354 "aliases": [ 00:11:31.354 "76a00159-1612-4016-abd4-18209aa321e5" 00:11:31.354 ], 00:11:31.354 "product_name": "Malloc disk", 00:11:31.354 "block_size": 512, 00:11:31.354 "num_blocks": 1048576, 00:11:31.354 "uuid": "76a00159-1612-4016-abd4-18209aa321e5", 00:11:31.354 "assigned_rate_limits": { 00:11:31.354 "rw_ios_per_sec": 0, 00:11:31.354 "rw_mbytes_per_sec": 0, 00:11:31.354 "r_mbytes_per_sec": 0, 00:11:31.354 "w_mbytes_per_sec": 0 00:11:31.354 }, 00:11:31.354 "claimed": true, 00:11:31.354 "claim_type": "exclusive_write", 00:11:31.354 "zoned": false, 00:11:31.354 "supported_io_types": { 00:11:31.354 "read": 
true, 00:11:31.354 "write": true, 00:11:31.354 "unmap": true, 00:11:31.354 "flush": true, 00:11:31.354 "reset": true, 00:11:31.354 "nvme_admin": false, 00:11:31.354 "nvme_io": false, 00:11:31.354 "nvme_io_md": false, 00:11:31.354 "write_zeroes": true, 00:11:31.354 "zcopy": true, 00:11:31.354 "get_zone_info": false, 00:11:31.354 "zone_management": false, 00:11:31.354 "zone_append": false, 00:11:31.354 "compare": false, 00:11:31.354 "compare_and_write": false, 00:11:31.354 "abort": true, 00:11:31.354 "seek_hole": false, 00:11:31.354 "seek_data": false, 00:11:31.354 "copy": true, 00:11:31.354 "nvme_iov_md": false 00:11:31.354 }, 00:11:31.354 "memory_domains": [ 00:11:31.354 { 00:11:31.354 "dma_device_id": "system", 00:11:31.354 "dma_device_type": 1 00:11:31.354 }, 00:11:31.354 { 00:11:31.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.354 "dma_device_type": 2 00:11:31.354 } 00:11:31.354 ], 00:11:31.354 "driver_specific": {} 00:11:31.354 } 00:11:31.354 ]' 00:11:31.354 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:31.354 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:31.354 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:31.354 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:31.354 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:31.354 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:31.354 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:31.354 02:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:32.731 02:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:32.731 02:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:32.731 02:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:32.731 02:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:32.731 02:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:34.635 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:34.894 02:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:35.462 02:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.841 ************************************ 00:11:36.841 START TEST filesystem_ext4 00:11:36.841 ************************************ 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
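Each filesystem_<fstype> subtest that follows (ext4, btrfs, xfs) runs the same body from target/filesystem.sh. Condensed from the trace, it is roughly the sequence below, where $fstype and $nvmfpid (203227 in this run) stand in for the traced values.

# nvmf_filesystem_create <fstype> <nvme device>, as exercised for ext4/btrfs/xfs
make_filesystem "$fstype" /dev/nvme0n1p1     # mkfs.ext4 -F / mkfs.btrfs -f / mkfs.xfs -f
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                        # simple create
sync
rm /mnt/device/aaa                           # simple delete
sync
umount /mnt/device
kill -0 "$nvmfpid"                           # target process must still be running
lsblk -l -o NAME | grep -q -w nvme0n1        # exported namespace still visible to the host
lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition still intact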
00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:36.841 02:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:36.841 mke2fs 1.47.0 (5-Feb-2023) 00:11:36.841 Discarding device blocks: 0/522240 done 00:11:36.841 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:36.841 Filesystem UUID: 2a6aa19e-7c92-4e4f-89d8-aef569f17335 00:11:36.841 Superblock backups stored on blocks: 00:11:36.841 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:36.841 00:11:36.841 Allocating group tables: 0/64 done 00:11:36.841 Writing inode tables: 0/64 done 00:11:37.409 Creating journal (8192 blocks): done 00:11:39.282 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:11:39.282 00:11:39.282 02:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:39.282 02:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:45.849 
02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 203227 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:45.849 00:11:45.849 real 0m8.292s 00:11:45.849 user 0m0.035s 00:11:45.849 sys 0m0.114s 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:45.849 ************************************ 00:11:45.849 END TEST filesystem_ext4 00:11:45.849 ************************************ 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.849 ************************************ 00:11:45.849 START TEST filesystem_btrfs 00:11:45.849 ************************************ 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:45.849 02:52:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:45.849 02:52:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:45.849 btrfs-progs v6.8.1 00:11:45.849 See https://btrfs.readthedocs.io for more information. 00:11:45.849 00:11:45.849 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:45.849 NOTE: several default settings have changed in version 5.15, please make sure 00:11:45.849 this does not affect your deployments: 00:11:45.849 - DUP for metadata (-m dup) 00:11:45.849 - enabled no-holes (-O no-holes) 00:11:45.849 - enabled free-space-tree (-R free-space-tree) 00:11:45.849 00:11:45.849 Label: (null) 00:11:45.849 UUID: 1ebfab95-7939-4fbb-b224-ab5ee3a05646 00:11:45.849 Node size: 16384 00:11:45.849 Sector size: 4096 (CPU page size: 4096) 00:11:45.849 Filesystem size: 510.00MiB 00:11:45.849 Block group profiles: 00:11:45.849 Data: single 8.00MiB 00:11:45.849 Metadata: DUP 32.00MiB 00:11:45.849 System: DUP 8.00MiB 00:11:45.849 SSD detected: yes 00:11:45.849 Zoned device: no 00:11:45.849 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:45.849 Checksum: crc32c 00:11:45.849 Number of devices: 1 00:11:45.849 Devices: 00:11:45.849 ID SIZE PATH 00:11:45.849 1 510.00MiB /dev/nvme0n1p1 00:11:45.849 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 203227 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:45.849 
02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:45.849 00:11:45.849 real 0m0.504s 00:11:45.849 user 0m0.026s 00:11:45.849 sys 0m0.151s 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:45.849 ************************************ 00:11:45.849 END TEST filesystem_btrfs 00:11:45.849 ************************************ 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.849 ************************************ 00:11:45.849 START TEST filesystem_xfs 00:11:45.849 ************************************ 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:45.849 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:45.850 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:45.850 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:45.850 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:45.850 02:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:45.850 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:45.850 = sectsz=512 attr=2, projid32bit=1 00:11:45.850 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:45.850 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:45.850 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:45.850 = sunit=0 swidth=0 blks 00:11:45.850 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:45.850 log =internal log bsize=4096 blocks=16384, version=2 00:11:45.850 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:45.850 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:46.788 Discarding blocks...Done. 00:11:46.788 02:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:46.788 02:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 203227 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:50.083 00:11:50.083 real 0m4.121s 00:11:50.083 user 0m0.020s 00:11:50.083 sys 0m0.122s 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:50.083 ************************************ 00:11:50.083 END TEST filesystem_xfs 00:11:50.083 ************************************ 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:50.083 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.084 02:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 203227 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 203227 ']' 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 203227 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203227 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203227' 00:11:50.084 killing process with pid 203227 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 203227 00:11:50.084 02:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 203227 00:11:50.347 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:50.347 00:11:50.347 real 0m19.637s 00:11:50.347 user 1m17.406s 00:11:50.347 sys 0m1.601s 00:11:50.347 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.347 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.347 ************************************ 00:11:50.347 END TEST nvmf_filesystem_no_in_capsule 00:11:50.347 ************************************ 00:11:50.347 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:50.347 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:50.347 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.347 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.347 ************************************ 00:11:50.347 START TEST nvmf_filesystem_in_capsule 00:11:50.347 ************************************ 00:11:50.347 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:50.348 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:50.348 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:50.348 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:50.348 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:50.348 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.348 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=206798 00:11:50.348 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 206798 00:11:50.348 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.348 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 206798 ']' 00:11:50.348 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.348 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.348 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:50.348 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.348 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.348 [2024-12-14 02:53:05.431009] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:50.348 [2024-12-14 02:53:05.431052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.607 [2024-12-14 02:53:05.510884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.607 [2024-12-14 02:53:05.532231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.607 [2024-12-14 02:53:05.532267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.607 [2024-12-14 02:53:05.532274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.607 [2024-12-14 02:53:05.532280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.607 [2024-12-14 02:53:05.532285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.607 [2024-12-14 02:53:05.533684] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.607 [2024-12-14 02:53:05.533791] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.607 [2024-12-14 02:53:05.533900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.607 [2024-12-14 02:53:05.533902] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.607 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.607 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:50.607 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:50.607 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.607 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.608 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.608 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:50.608 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:50.608 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.608 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.608 [2024-12-14 02:53:05.673571] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.608 02:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.608 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:50.608 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.608 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.868 Malloc1 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.868 [2024-12-14 02:53:05.837506] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:50.868 02:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:50.868 { 00:11:50.868 "name": "Malloc1", 00:11:50.868 "aliases": [ 00:11:50.868 "67b77554-f40b-49d5-a8c2-1dc5d14236cb" 00:11:50.868 ], 00:11:50.868 "product_name": "Malloc disk", 00:11:50.868 "block_size": 512, 00:11:50.868 "num_blocks": 1048576, 00:11:50.868 "uuid": "67b77554-f40b-49d5-a8c2-1dc5d14236cb", 00:11:50.868 "assigned_rate_limits": { 00:11:50.868 "rw_ios_per_sec": 0, 00:11:50.868 "rw_mbytes_per_sec": 0, 00:11:50.868 "r_mbytes_per_sec": 0, 00:11:50.868 "w_mbytes_per_sec": 0 00:11:50.868 }, 00:11:50.868 "claimed": true, 00:11:50.868 "claim_type": "exclusive_write", 00:11:50.868 "zoned": false, 00:11:50.868 "supported_io_types": { 00:11:50.868 "read": true, 00:11:50.868 "write": true, 00:11:50.868 "unmap": true, 00:11:50.868 "flush": true, 00:11:50.868 "reset": true, 00:11:50.868 "nvme_admin": false, 00:11:50.868 "nvme_io": false, 00:11:50.868 "nvme_io_md": false, 00:11:50.868 "write_zeroes": true, 00:11:50.868 "zcopy": true, 00:11:50.868 "get_zone_info": false, 00:11:50.868 "zone_management": false, 00:11:50.868 "zone_append": false, 00:11:50.868 "compare": false, 00:11:50.868 "compare_and_write": false, 00:11:50.868 "abort": true, 00:11:50.868 "seek_hole": false, 00:11:50.868 "seek_data": false, 00:11:50.868 "copy": true, 00:11:50.868 "nvme_iov_md": false 00:11:50.868 }, 00:11:50.868 "memory_domains": [ 00:11:50.868 { 00:11:50.868 "dma_device_id": "system", 00:11:50.868 "dma_device_type": 1 00:11:50.868 }, 00:11:50.868 { 00:11:50.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.868 "dma_device_type": 2 00:11:50.868 } 00:11:50.868 ], 00:11:50.868 "driver_specific": {} 00:11:50.868 } 00:11:50.868 ]' 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:50.868 02:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.248 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:52.248 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:52.248 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.248 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:52.248 02:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:54.154 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:54.154 02:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:54.413 02:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.350 ************************************ 00:11:55.350 START TEST filesystem_in_capsule_ext4 00:11:55.350 ************************************ 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:55.350 02:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:55.350 mke2fs 1.47.0 (5-Feb-2023) 00:11:55.609 Discarding device blocks: 0/522240 done 00:11:55.609 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:55.609 Filesystem UUID: 09a85ea1-c93e-4008-8409-c51d8b18323a 00:11:55.609 Superblock backups stored on blocks: 00:11:55.609 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:55.609 00:11:55.609 Allocating group tables: 0/64 done 00:11:55.609 Writing inode tables: 
0/64 done 00:11:58.146 Creating journal (8192 blocks): done 00:11:58.146 Writing superblocks and filesystem accounting information: 0/64 done 00:11:58.146 00:11:58.146 02:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:58.146 02:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 206798 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.424 00:12:03.424 real 0m7.814s 00:12:03.424 user 0m0.032s 00:12:03.424 sys 0m0.069s 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:03.424 ************************************ 00:12:03.424 END TEST filesystem_in_capsule_ext4 00:12:03.424 ************************************ 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.424 
************************************ 00:12:03.424 START TEST filesystem_in_capsule_btrfs 00:12:03.424 ************************************ 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:03.424 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:03.424 btrfs-progs v6.8.1 00:12:03.425 See https://btrfs.readthedocs.io for more information. 00:12:03.425 00:12:03.425 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
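[editor's note] The make_filesystem trace above (autotest_common.sh@930-@941) picks the mkfs force flag from the filesystem type: -F for ext4, -f otherwise, then runs mkfs.<fstype> on the partition. A condensed, hypothetical sketch of that pattern; the real helper also keeps a retry counter (the "local i=0" seen in the trace), which is omitted here:

    make_filesystem_sketch() {
        local fstype=$1 dev_name=$2 force
        # ext4 forces with -F; xfs and btrfs force with -f, as in the trace
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs."$fstype" "$force" "$dev_name"
    }
    # e.g. make_filesystem_sketch btrfs /dev/nvme0n1p1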
00:12:03.425 NOTE: several default settings have changed in version 5.15, please make sure 00:12:03.425 this does not affect your deployments: 00:12:03.425 - DUP for metadata (-m dup) 00:12:03.425 - enabled no-holes (-O no-holes) 00:12:03.425 - enabled free-space-tree (-R free-space-tree) 00:12:03.425 00:12:03.425 Label: (null) 00:12:03.425 UUID: fa2b6fe2-aad2-47cd-993e-a9e1c44907e1 00:12:03.425 Node size: 16384 00:12:03.425 Sector size: 4096 (CPU page size: 4096) 00:12:03.425 Filesystem size: 510.00MiB 00:12:03.425 Block group profiles: 00:12:03.425 Data: single 8.00MiB 00:12:03.425 Metadata: DUP 32.00MiB 00:12:03.425 System: DUP 8.00MiB 00:12:03.425 SSD detected: yes 00:12:03.425 Zoned device: no 00:12:03.425 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:03.425 Checksum: crc32c 00:12:03.425 Number of devices: 1 00:12:03.425 Devices: 00:12:03.425 ID SIZE PATH 00:12:03.425 1 510.00MiB /dev/nvme0n1p1 00:12:03.425 00:12:03.425 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:03.425 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.685 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.685 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:03.685 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.685 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:03.685 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:03.685 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.945 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 206798 00:12:03.945 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.945 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.945 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.945 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.945 00:12:03.945 real 0m0.538s 00:12:03.945 user 0m0.026s 00:12:03.945 sys 0m0.114s 00:12:03.945 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.945 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:03.946 ************************************ 00:12:03.946 END TEST filesystem_in_capsule_btrfs 00:12:03.946 ************************************ 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.946 ************************************ 00:12:03.946 START TEST filesystem_in_capsule_xfs 00:12:03.946 ************************************ 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:03.946 02:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:03.946 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:03.946 = sectsz=512 attr=2, projid32bit=1 00:12:03.946 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:03.946 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:03.946 data = bsize=4096 blocks=130560, imaxpct=25 00:12:03.946 = sunit=0 swidth=0 blks 00:12:03.946 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:03.946 log =internal log bsize=4096 blocks=16384, version=2 00:12:03.946 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:03.946 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:04.885 Discarding blocks...Done. 
00:12:04.885 02:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:04.885 02:53:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.796 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.796 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:06.796 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.796 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:06.796 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:06.796 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.796 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 206798 00:12:06.796 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.796 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.796 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.796 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.796 00:12:06.796 real 0m2.681s 00:12:06.796 user 0m0.023s 00:12:06.796 sys 0m0.076s 00:12:06.796 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.796 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:06.796 ************************************ 00:12:06.796 END TEST filesystem_in_capsule_xfs 00:12:06.796 ************************************ 00:12:06.796 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:07.057 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:07.057 02:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 206798 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 206798 ']' 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 206798 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 206798 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 206798' 00:12:07.057 killing process with pid 206798 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 206798 00:12:07.057 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 206798 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:07.627 00:12:07.627 real 0m17.094s 00:12:07.627 user 1m7.292s 00:12:07.627 sys 0m1.414s 00:12:07.627 02:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.627 ************************************ 00:12:07.627 END TEST nvmf_filesystem_in_capsule 00:12:07.627 ************************************ 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.627 rmmod nvme_tcp 00:12:07.627 rmmod nvme_fabrics 00:12:07.627 rmmod nvme_keyring 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.627 02:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.537 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:09.537 00:12:09.537 real 0m45.422s 00:12:09.537 user 2m26.769s 00:12:09.537 sys 0m7.652s 00:12:09.537 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.537 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:09.537 
************************************ 00:12:09.537 END TEST nvmf_filesystem 00:12:09.537 ************************************ 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:09.797 ************************************ 00:12:09.797 START TEST nvmf_target_discovery 00:12:09.797 ************************************ 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:09.797 * Looking for test storage... 00:12:09.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:09.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.797 --rc genhtml_branch_coverage=1 00:12:09.797 --rc genhtml_function_coverage=1 00:12:09.797 --rc genhtml_legend=1 00:12:09.797 --rc geninfo_all_blocks=1 00:12:09.797 --rc geninfo_unexecuted_blocks=1 00:12:09.797 00:12:09.797 ' 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:09.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.797 --rc genhtml_branch_coverage=1 00:12:09.797 --rc genhtml_function_coverage=1 00:12:09.797 --rc genhtml_legend=1 00:12:09.797 --rc geninfo_all_blocks=1 00:12:09.797 --rc geninfo_unexecuted_blocks=1 00:12:09.797 00:12:09.797 ' 00:12:09.797 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:09.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.797 --rc genhtml_branch_coverage=1 00:12:09.797 --rc genhtml_function_coverage=1 00:12:09.798 --rc genhtml_legend=1 00:12:09.798 --rc geninfo_all_blocks=1 00:12:09.798 --rc geninfo_unexecuted_blocks=1 00:12:09.798 00:12:09.798 ' 00:12:09.798 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:09.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.798 --rc genhtml_branch_coverage=1 00:12:09.798 --rc genhtml_function_coverage=1 00:12:09.798 --rc genhtml_legend=1 00:12:09.798 --rc geninfo_all_blocks=1 00:12:09.798 --rc geninfo_unexecuted_blocks=1 00:12:09.798 00:12:09.798 ' 00:12:09.798 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.798 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:09.798 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.798 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.798 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.798 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.798 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.798 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.798 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.798 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.798 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.798 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:10.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.058 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:10.059 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:10.059 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:10.059 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.059 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.059 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.059 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:10.059 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:10.059 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:10.059 02:53:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:16.641 02:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:16.641 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:16.641 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:16.641 Found net devices under 0000:af:00.0: cvl_0_0 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
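An aside on two things visible in the block above. First, the earlier "line 33: [: : integer expression expected" complaint is harmless: an unset variable expands to the empty string inside an integer test (the trace shows '[' '' -eq 1 ']'), and a guard such as [ "${SOME_FLAG:-0}" -eq 1 ] — SOME_FLAG being a stand-in name here, not the variable common.sh actually uses — would avoid the message. Second, the device scan walks each supported PCI function and maps it to its kernel net device through sysfs. A minimal stand-alone sketch of that lookup, assuming the Intel 0x159b (E810) functions seen in this run and a host with lspci available, not the test's own code, is:

  # Hypothetical re-creation of the pci_net_devs lookup traced above;
  # adjust the vendor:device filter (8086:159b here) for other NICs.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] || continue
          dev=$(basename "$netdir")
          echo "Found net device under $pci: $dev ($(cat "$netdir"/operstate))"
      done
  done

In this run the two 0000:af:00.x functions resolve to cvl_0_0 and cvl_0_1, which is what the trace reports next.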
00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:16.641 Found net devices under 0000:af:00.1: cvl_0_1 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:16.641 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.642 02:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:16.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:12:16.642 00:12:16.642 --- 10.0.0.2 ping statistics --- 00:12:16.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.642 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:16.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:12:16.642 00:12:16.642 --- 10.0.0.1 ping statistics --- 00:12:16.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.642 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=213192 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 213192 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 213192 ']' 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.642 02:53:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.642 [2024-12-14 02:53:30.956771] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:16.642 [2024-12-14 02:53:30.956811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.642 [2024-12-14 02:53:31.016820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.642 [2024-12-14 02:53:31.038839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.642 [2024-12-14 02:53:31.038877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.642 [2024-12-14 02:53:31.038884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.642 [2024-12-14 02:53:31.038889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.642 [2024-12-14 02:53:31.038894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
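With nvmf_tgt launched inside cvl_0_0_ns_spdk (its reactor start-up notices continue below), everything that follows in the trace is driven over JSON-RPC via rpc_cmd. A condensed sketch of the same configuration using scripts/rpc.py directly — the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions on top of this log; the NQNs, serial numbers, bdev sizes, addresses and ports are the ones discovery.sh uses in this run — is:

  rpc="ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192            # transport options as traced below
  for i in 1 2 3 4; do
      $rpc bdev_null_create Null$i 102400 512              # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420                 # the test also passes --hostnqn/--hostid; expect 6 records

The six-record discovery log and the nvmf_get_subsystems JSON further down are the observable result of exactly this sequence.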
00:12:16.642 [2024-12-14 02:53:31.040184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.642 [2024-12-14 02:53:31.040294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.642 [2024-12-14 02:53:31.040402] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.642 [2024-12-14 02:53:31.040403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.642 [2024-12-14 02:53:31.179904] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.642 Null1 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.642 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.642 02:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 [2024-12-14 02:53:31.245464] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 Null2 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:16.643 Null3 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 Null4 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.643 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:16.643 00:12:16.643 Discovery Log Number of Records 6, Generation counter 6 00:12:16.643 =====Discovery Log Entry 0====== 00:12:16.643 trtype: tcp 00:12:16.643 adrfam: ipv4 00:12:16.643 subtype: current discovery subsystem 00:12:16.643 treq: not required 00:12:16.643 portid: 0 00:12:16.643 trsvcid: 4420 00:12:16.643 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:16.643 traddr: 10.0.0.2 00:12:16.643 eflags: explicit discovery connections, duplicate discovery information 00:12:16.643 sectype: none 00:12:16.643 =====Discovery Log Entry 1====== 00:12:16.643 trtype: tcp 00:12:16.643 adrfam: ipv4 00:12:16.643 subtype: nvme subsystem 00:12:16.643 treq: not required 00:12:16.643 portid: 0 00:12:16.643 trsvcid: 4420 00:12:16.643 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:16.643 traddr: 10.0.0.2 00:12:16.643 eflags: none 00:12:16.643 sectype: none 00:12:16.643 =====Discovery Log Entry 2====== 00:12:16.643 trtype: tcp 00:12:16.643 adrfam: ipv4 00:12:16.643 subtype: nvme subsystem 00:12:16.643 treq: not required 00:12:16.643 portid: 0 00:12:16.643 trsvcid: 4420 00:12:16.643 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:16.643 traddr: 10.0.0.2 00:12:16.643 eflags: none 00:12:16.643 sectype: none 00:12:16.643 =====Discovery Log Entry 3====== 00:12:16.643 trtype: tcp 00:12:16.643 adrfam: ipv4 00:12:16.643 subtype: nvme subsystem 00:12:16.643 treq: not required 00:12:16.643 portid: 0 00:12:16.643 trsvcid: 4420 00:12:16.643 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:16.643 traddr: 10.0.0.2 00:12:16.643 eflags: none 00:12:16.643 sectype: none 00:12:16.643 =====Discovery Log Entry 4====== 00:12:16.643 trtype: tcp 00:12:16.643 adrfam: ipv4 00:12:16.643 subtype: nvme subsystem 
00:12:16.643 treq: not required 00:12:16.643 portid: 0 00:12:16.643 trsvcid: 4420 00:12:16.643 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:16.643 traddr: 10.0.0.2 00:12:16.643 eflags: none 00:12:16.644 sectype: none 00:12:16.644 =====Discovery Log Entry 5====== 00:12:16.644 trtype: tcp 00:12:16.644 adrfam: ipv4 00:12:16.644 subtype: discovery subsystem referral 00:12:16.644 treq: not required 00:12:16.644 portid: 0 00:12:16.644 trsvcid: 4430 00:12:16.644 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:16.644 traddr: 10.0.0.2 00:12:16.644 eflags: none 00:12:16.644 sectype: none 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:16.644 Perform nvmf subsystem discovery via RPC 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.644 [ 00:12:16.644 { 00:12:16.644 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:16.644 "subtype": "Discovery", 00:12:16.644 "listen_addresses": [ 00:12:16.644 { 00:12:16.644 "trtype": "TCP", 00:12:16.644 "adrfam": "IPv4", 00:12:16.644 "traddr": "10.0.0.2", 00:12:16.644 "trsvcid": "4420" 00:12:16.644 } 00:12:16.644 ], 00:12:16.644 "allow_any_host": true, 00:12:16.644 "hosts": [] 00:12:16.644 }, 00:12:16.644 { 00:12:16.644 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.644 "subtype": "NVMe", 00:12:16.644 "listen_addresses": [ 00:12:16.644 { 00:12:16.644 "trtype": "TCP", 00:12:16.644 "adrfam": "IPv4", 00:12:16.644 "traddr": "10.0.0.2", 00:12:16.644 "trsvcid": "4420" 00:12:16.644 } 00:12:16.644 ], 00:12:16.644 "allow_any_host": true, 00:12:16.644 "hosts": [], 00:12:16.644 "serial_number": "SPDK00000000000001", 00:12:16.644 "model_number": "SPDK bdev Controller", 00:12:16.644 "max_namespaces": 32, 00:12:16.644 "min_cntlid": 1, 00:12:16.644 "max_cntlid": 65519, 00:12:16.644 "namespaces": [ 00:12:16.644 { 00:12:16.644 "nsid": 1, 00:12:16.644 "bdev_name": "Null1", 00:12:16.644 "name": "Null1", 00:12:16.644 "nguid": "8FE5F90020FA4CB69078AA2519EE25CD", 00:12:16.644 "uuid": "8fe5f900-20fa-4cb6-9078-aa2519ee25cd" 00:12:16.644 } 00:12:16.644 ] 00:12:16.644 }, 00:12:16.644 { 00:12:16.644 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:16.644 "subtype": "NVMe", 00:12:16.644 "listen_addresses": [ 00:12:16.644 { 00:12:16.644 "trtype": "TCP", 00:12:16.644 "adrfam": "IPv4", 00:12:16.644 "traddr": "10.0.0.2", 00:12:16.644 "trsvcid": "4420" 00:12:16.644 } 00:12:16.644 ], 00:12:16.644 "allow_any_host": true, 00:12:16.644 "hosts": [], 00:12:16.644 "serial_number": "SPDK00000000000002", 00:12:16.644 "model_number": "SPDK bdev Controller", 00:12:16.644 "max_namespaces": 32, 00:12:16.644 "min_cntlid": 1, 00:12:16.644 "max_cntlid": 65519, 00:12:16.644 "namespaces": [ 00:12:16.644 { 00:12:16.644 "nsid": 1, 00:12:16.644 "bdev_name": "Null2", 00:12:16.644 "name": "Null2", 00:12:16.644 "nguid": "7EDFDF35E7C04D1CB9B2E891EDA8D353", 00:12:16.644 "uuid": "7edfdf35-e7c0-4d1c-b9b2-e891eda8d353" 00:12:16.644 } 00:12:16.644 ] 00:12:16.644 }, 00:12:16.644 { 00:12:16.644 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:16.644 "subtype": "NVMe", 00:12:16.644 "listen_addresses": [ 00:12:16.644 { 00:12:16.644 "trtype": "TCP", 00:12:16.644 "adrfam": "IPv4", 00:12:16.644 "traddr": "10.0.0.2", 
00:12:16.644 "trsvcid": "4420" 00:12:16.644 } 00:12:16.644 ], 00:12:16.644 "allow_any_host": true, 00:12:16.644 "hosts": [], 00:12:16.644 "serial_number": "SPDK00000000000003", 00:12:16.644 "model_number": "SPDK bdev Controller", 00:12:16.644 "max_namespaces": 32, 00:12:16.644 "min_cntlid": 1, 00:12:16.644 "max_cntlid": 65519, 00:12:16.644 "namespaces": [ 00:12:16.644 { 00:12:16.644 "nsid": 1, 00:12:16.644 "bdev_name": "Null3", 00:12:16.644 "name": "Null3", 00:12:16.644 "nguid": "30E3221F5AF0415CB780AC0B8A4E3A05", 00:12:16.644 "uuid": "30e3221f-5af0-415c-b780-ac0b8a4e3a05" 00:12:16.644 } 00:12:16.644 ] 00:12:16.644 }, 00:12:16.644 { 00:12:16.644 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:16.644 "subtype": "NVMe", 00:12:16.644 "listen_addresses": [ 00:12:16.644 { 00:12:16.644 "trtype": "TCP", 00:12:16.644 "adrfam": "IPv4", 00:12:16.644 "traddr": "10.0.0.2", 00:12:16.644 "trsvcid": "4420" 00:12:16.644 } 00:12:16.644 ], 00:12:16.644 "allow_any_host": true, 00:12:16.644 "hosts": [], 00:12:16.644 "serial_number": "SPDK00000000000004", 00:12:16.644 "model_number": "SPDK bdev Controller", 00:12:16.644 "max_namespaces": 32, 00:12:16.644 "min_cntlid": 1, 00:12:16.644 "max_cntlid": 65519, 00:12:16.644 "namespaces": [ 00:12:16.644 { 00:12:16.644 "nsid": 1, 00:12:16.644 "bdev_name": "Null4", 00:12:16.644 "name": "Null4", 00:12:16.644 "nguid": "AFCBEC35E1F64F83BFF52BF64D3F2709", 00:12:16.644 "uuid": "afcbec35-e1f6-4f83-bff5-2bf64d3f2709" 00:12:16.644 } 00:12:16.644 ] 00:12:16.644 } 00:12:16.644 ] 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.644 02:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.644 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:16.645 02:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.645 rmmod nvme_tcp 00:12:16.645 rmmod nvme_fabrics 00:12:16.645 rmmod nvme_keyring 00:12:16.645 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.905 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:16.905 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:16.905 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 213192 ']' 00:12:16.905 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 213192 00:12:16.905 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 213192 ']' 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 213192 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 213192 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 213192' 00:12:16.906 killing process with pid 213192 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 213192 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 213192 00:12:16.906 02:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.906 02:53:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:19.448 00:12:19.448 real 0m9.334s 00:12:19.448 user 0m5.670s 00:12:19.448 sys 0m4.765s 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.448 ************************************ 00:12:19.448 END TEST nvmf_target_discovery 00:12:19.448 ************************************ 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.448 ************************************ 00:12:19.448 START TEST nvmf_referrals 00:12:19.448 ************************************ 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:19.448 * Looking for test storage... 
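Before the referrals output below continues, note that the teardown just traced (nvme module unload, killprocess on PID 213192, iptables restore, namespace removal) can be replayed by hand if a run is interrupted. A rough sketch, using the names from this particular run and assuming that _remove_spdk_ns amounts to deleting the namespace, is:

  # Names/PID below are specific to this run; deleting the netns returns cvl_0_0
  # to the default namespace.
  sync
  modprobe -v -r nvme-tcp nvme-fabrics
  kill 213192                                            # the nvmf_tgt started by nvmfappstart
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drops the SPDK_NVMF-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1

The nvmf_referrals test starting here then performs a similar init before exercising discovery referrals.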
00:12:19.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.448 --rc genhtml_branch_coverage=1 00:12:19.448 --rc genhtml_function_coverage=1 00:12:19.448 --rc genhtml_legend=1 00:12:19.448 --rc geninfo_all_blocks=1 00:12:19.448 --rc geninfo_unexecuted_blocks=1 00:12:19.448 00:12:19.448 ' 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.448 --rc genhtml_branch_coverage=1 00:12:19.448 --rc genhtml_function_coverage=1 00:12:19.448 --rc genhtml_legend=1 00:12:19.448 --rc geninfo_all_blocks=1 00:12:19.448 --rc geninfo_unexecuted_blocks=1 00:12:19.448 00:12:19.448 ' 00:12:19.448 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.449 --rc genhtml_branch_coverage=1 00:12:19.449 --rc genhtml_function_coverage=1 00:12:19.449 --rc genhtml_legend=1 00:12:19.449 --rc geninfo_all_blocks=1 00:12:19.449 --rc geninfo_unexecuted_blocks=1 00:12:19.449 00:12:19.449 ' 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.449 --rc genhtml_branch_coverage=1 00:12:19.449 --rc genhtml_function_coverage=1 00:12:19.449 --rc genhtml_legend=1 00:12:19.449 --rc geninfo_all_blocks=1 00:12:19.449 --rc geninfo_unexecuted_blocks=1 00:12:19.449 00:12:19.449 ' 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:19.449 02:53:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:26.027 02:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:26.027 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:26.027 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:26.028 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:26.028 
02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:26.028 Found net devices under 0000:af:00.0: cvl_0_0 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:26.028 Found net devices under 0000:af:00.1: cvl_0_1 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:26.028 02:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.028 02:53:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:26.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:26.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:12:26.028 00:12:26.028 --- 10.0.0.2 ping statistics --- 00:12:26.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.028 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:12:26.028 00:12:26.028 --- 10.0.0.1 ping statistics --- 00:12:26.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.028 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=216903 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 216903 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 216903 ']' 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
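The namespace plumbing exercised above follows a fixed pattern: the target-side port (cvl_0_0) is moved into a private namespace, both ends get 10.0.0.x/24 addresses, TCP/4420 is opened with iptables, reachability is ping-checked, and only then is nvmf_tgt launched inside the namespace. A condensed sketch of the same steps, using the interface names and SPDK path from this particular run (they will differ on other hosts):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # comment tag omitted here for brevity
  ping -c 1 10.0.0.2                                      # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF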
00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.028 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.028 [2024-12-14 02:53:40.328253] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:26.029 [2024-12-14 02:53:40.328294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.029 [2024-12-14 02:53:40.406669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.029 [2024-12-14 02:53:40.428948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.029 [2024-12-14 02:53:40.428986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.029 [2024-12-14 02:53:40.428993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.029 [2024-12-14 02:53:40.428999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.029 [2024-12-14 02:53:40.429004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.029 [2024-12-14 02:53:40.430444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.029 [2024-12-14 02:53:40.430553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.029 [2024-12-14 02:53:40.430635] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.029 [2024-12-14 02:53:40.430635] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.029 [2024-12-14 02:53:40.574833] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:26.029 [2024-12-14 02:53:40.606463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.029 02:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.029 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:26.029 02:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:26.029 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.029 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.029 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.029 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.029 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.289 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.548 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:26.548 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:26.548 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:26.548 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:26.548 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:26.548 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.549 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:26.807 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:26.807 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:26.807 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:26.807 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:26.807 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.807 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:26.808 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:26.808 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:26.808 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.808 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.808 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.808 02:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:26.808 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:26.808 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.808 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:26.808 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.808 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:26.808 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.808 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.068 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:27.068 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:27.068 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:27.068 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.068 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:27.068 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.068 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:27.068 02:53:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.068 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:27.068 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:27.068 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:27.068 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:27.068 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:27.068 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.068 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:27.327 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:27.327 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:27.327 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:27.327 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:27.327 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.327 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.587 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:27.846 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:27.846 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:27.846 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:27.846 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:27.846 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:27.846 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:27.846 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
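Each referral step above is checked from both sides: the target's RPC view (nvmf_discovery_get_referrals) and the initiator's view (nvme discover against the discovery listener on 10.0.0.2:8009), with jq reducing each to a comparable address list. The rpc_cmd helper in the trace wraps SPDK's JSON-RPC client; a rough stand-alone equivalent of one add/check/remove round, assuming SPDK's rpc.py is on PATH and the --hostnqn/--hostid options used above are omitted for brevity, would be:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430        # point at another discovery service
  rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'      # target-side view
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'   # initiator-side view
  rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430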
00:12:27.846 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:27.846 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:27.846 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:27.846 rmmod nvme_tcp 00:12:27.846 rmmod nvme_fabrics 00:12:27.846 rmmod nvme_keyring 00:12:27.846 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:27.846 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:27.847 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:27.847 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 216903 ']' 00:12:27.847 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 216903 00:12:27.847 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 216903 ']' 00:12:27.847 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 216903 00:12:27.847 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:27.847 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.847 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216903 00:12:27.847 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.847 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.847 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216903' 00:12:27.847 killing process with pid 216903 00:12:27.847 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 216903 00:12:27.847 02:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 216903 00:12:28.106 02:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:28.107 02:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:28.107 02:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:28.107 02:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:28.107 02:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:28.107 02:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:28.107 02:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:28.107 02:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.107 02:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:28.107 02:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.107 02:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.107 02:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.017 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:30.017 00:12:30.017 real 0m10.943s 00:12:30.017 user 0m12.821s 00:12:30.017 sys 0m5.159s 00:12:30.017 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.017 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.017 ************************************ 00:12:30.017 END TEST nvmf_referrals 00:12:30.017 ************************************ 00:12:30.017 02:53:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:30.017 02:53:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:30.017 02:53:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.018 02:53:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.278 ************************************ 00:12:30.278 START TEST nvmf_connect_disconnect 00:12:30.278 ************************************ 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:30.278 * Looking for test storage... 00:12:30.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:30.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.278 --rc genhtml_branch_coverage=1 00:12:30.278 --rc genhtml_function_coverage=1 00:12:30.278 --rc genhtml_legend=1 00:12:30.278 --rc geninfo_all_blocks=1 00:12:30.278 --rc geninfo_unexecuted_blocks=1 00:12:30.278 00:12:30.278 ' 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:30.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.278 --rc genhtml_branch_coverage=1 00:12:30.278 --rc genhtml_function_coverage=1 00:12:30.278 --rc genhtml_legend=1 00:12:30.278 --rc geninfo_all_blocks=1 00:12:30.278 --rc geninfo_unexecuted_blocks=1 00:12:30.278 00:12:30.278 ' 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:30.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.278 --rc genhtml_branch_coverage=1 00:12:30.278 --rc genhtml_function_coverage=1 00:12:30.278 --rc genhtml_legend=1 00:12:30.278 --rc geninfo_all_blocks=1 00:12:30.278 --rc geninfo_unexecuted_blocks=1 00:12:30.278 00:12:30.278 ' 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:30.278 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.278 --rc genhtml_branch_coverage=1 00:12:30.278 --rc genhtml_function_coverage=1 00:12:30.278 --rc genhtml_legend=1 00:12:30.278 --rc geninfo_all_blocks=1 00:12:30.278 --rc geninfo_unexecuted_blocks=1 00:12:30.278 00:12:30.278 ' 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.278 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.279 02:53:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:30.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.279 02:53:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:36.852 
02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:36.852 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.852 
02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:36.852 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:36.852 Found net devices under 0000:af:00.0: cvl_0_0 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
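The discovery entries around this point resolve each supported PCI function (0000:af:00.0 and 0000:af:00.1 here) to its kernel net device by globbing sysfs, which is what the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansions in the trace do before each "Found net devices under ..." message is printed. A minimal standalone sketch of that pattern, assuming the PCI addresses are passed in explicitly and using operstate as a stand-in for the harness's link-state check (the real list and check live in nvmf/common.sh):

    # Sketch only: map PCI functions to their net devices the way the trace does.
    # The address list is illustrative; nvmf/common.sh builds it from a PCI bus cache.
    for pci in 0000:af:00.0 0000:af:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            dev=${path##*/}                                   # e.g. cvl_0_0
            state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
            [ "$state" = up ] && echo "Found net devices under $pci: $dev"
        done
    done

The two devices found this way, cvl_0_0 and cvl_0_1, become the target and initiator interfaces for the TCP test in the entries that follow.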
00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:36.852 Found net devices under 0000:af:00.1: cvl_0_1 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.852 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:36.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:12:36.853 00:12:36.853 --- 10.0.0.2 ping statistics --- 00:12:36.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.853 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:12:36.853 00:12:36.853 --- 10.0.0.1 ping statistics --- 00:12:36.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.853 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=220910 00:12:36.853 02:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 220910 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 220910 ']' 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.853 [2024-12-14 02:53:51.478772] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:36.853 [2024-12-14 02:53:51.478817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.853 [2024-12-14 02:53:51.540310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.853 [2024-12-14 02:53:51.562875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.853 [2024-12-14 02:53:51.562915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.853 [2024-12-14 02:53:51.562922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.853 [2024-12-14 02:53:51.562928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.853 [2024-12-14 02:53:51.562933] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
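With the target application up inside the cvl_0_0_ns_spdk namespace (pid 220910), the entries that follow configure it over RPC and then drive the connect/disconnect loop. A condensed sketch of that sequence as it can be read back from the trace; rpc_cmd is the harness's RPC helper, and the explicit loop body here is a paraphrase of connect_disconnect.sh rather than a verbatim copy (the real script also waits for the controller to appear before each teardown):

    # Target-side configuration, matching the rpc_cmd calls in the trace.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512        # Malloc0, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: 100 iterations (num_iterations=100, NVME_CONNECT='nvme connect -i 8'),
    # each one producing a "disconnected 1 controller(s)" line in the log below.
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done

Each disconnect prints one of the NQN:nqn.2016-06.io.spdk:cnode1 "disconnected 1 controller(s)" lines that make up the bulk of the remaining output; after the hundredth iteration nvmftestfini unloads the nvme-tcp/nvme-fabrics modules and kills pid 220910.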
00:12:36.853 [2024-12-14 02:53:51.564350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.853 [2024-12-14 02:53:51.564457] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.853 [2024-12-14 02:53:51.564566] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.853 [2024-12-14 02:53:51.564567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.853 [2024-12-14 02:53:51.708269] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.853 02:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.853 [2024-12-14 02:53:51.770817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:36.853 02:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:39.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.501 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:15:42.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.599 [2024-12-14 02:56:59.449225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190080 is same with the state(6) to be set 00:15:44.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:28.349 rmmod nvme_tcp 00:16:28.349 rmmod nvme_fabrics 00:16:28.349 rmmod nvme_keyring 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 220910 ']' 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 220910 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 220910 ']' 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@958 -- # kill -0 220910 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 220910 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 220910' 00:16:28.349 killing process with pid 220910 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 220910 00:16:28.349 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 220910 00:16:28.609 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:28.609 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:28.609 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:28.609 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:28.609 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:28.609 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:28.609 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:28.609 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:28.609 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:28.609 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.609 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.609 02:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.516 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:30.516 00:16:30.516 real 4m0.441s 00:16:30.516 user 15m18.541s 00:16:30.516 sys 0m24.429s 00:16:30.516 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.516 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:30.516 ************************************ 00:16:30.516 END TEST nvmf_connect_disconnect 00:16:30.516 ************************************ 00:16:30.516 02:57:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:30.516 02:57:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # 
'[' 3 -le 1 ']' 00:16:30.516 02:57:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.516 02:57:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:30.775 ************************************ 00:16:30.775 START TEST nvmf_multitarget 00:16:30.775 ************************************ 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:30.775 * Looking for test storage... 00:16:30.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.775 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:30.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.776 --rc genhtml_branch_coverage=1 00:16:30.776 --rc genhtml_function_coverage=1 00:16:30.776 --rc genhtml_legend=1 00:16:30.776 --rc geninfo_all_blocks=1 00:16:30.776 --rc geninfo_unexecuted_blocks=1 00:16:30.776 00:16:30.776 ' 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:30.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.776 --rc genhtml_branch_coverage=1 00:16:30.776 --rc genhtml_function_coverage=1 00:16:30.776 --rc genhtml_legend=1 00:16:30.776 --rc geninfo_all_blocks=1 00:16:30.776 --rc geninfo_unexecuted_blocks=1 00:16:30.776 00:16:30.776 ' 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:30.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.776 --rc genhtml_branch_coverage=1 00:16:30.776 --rc genhtml_function_coverage=1 00:16:30.776 --rc genhtml_legend=1 00:16:30.776 --rc geninfo_all_blocks=1 00:16:30.776 --rc geninfo_unexecuted_blocks=1 00:16:30.776 00:16:30.776 ' 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:30.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.776 --rc genhtml_branch_coverage=1 00:16:30.776 --rc genhtml_function_coverage=1 00:16:30.776 --rc genhtml_legend=1 00:16:30.776 --rc geninfo_all_blocks=1 00:16:30.776 --rc geninfo_unexecuted_blocks=1 00:16:30.776 00:16:30.776 ' 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.776 02:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:30.776 02:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:30.776 02:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
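The trace above shows nvmf/common.sh grouping supported NICs by PCI vendor:device ID (the e810, x722 and mlx arrays) out of a pre-built pci_bus_cache map. A rough standalone sketch of that pattern follows; the lspci-based cache fill is an assumption for illustration only, not how the real common.sh populates its cache.

#!/usr/bin/env bash
# Sketch: group NIC PCI addresses by "0xVENDOR:0xDEVICE" the way the
# pci_bus_cache lookups above do. The lspci fill here is an assumption;
# the real common.sh builds its cache elsewhere.
declare -A pci_bus_cache=()
while read -r addr class vendor device _; do
    [[ $class == 0200 ]] || continue                      # Ethernet controllers only
    pci_bus_cache["0x$vendor:0x$device"]+="$addr "        # e.g. 0x8086:0x159b -> "0000:af:00.0 "
done < <(lspci -Dnmm | tr -d '"')

e810=(${pci_bus_cache["0x8086:0x159b"]})                  # same device ID matched in the log below
echo "E810 ports: ${e810[*]:-none}"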
00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:37.348 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:37.348 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:37.349 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:37.349 Found net devices under 0000:af:00.0: cvl_0_0 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:37.349 Found net devices under 0000:af:00.1: cvl_0_1 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:37.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:37.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:16:37.349 00:16:37.349 --- 10.0.0.2 ping statistics --- 00:16:37.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.349 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:37.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:37.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:16:37.349 00:16:37.349 --- 10.0.0.1 ping statistics --- 00:16:37.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.349 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=264284 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 264284 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 264284 ']' 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.349 02:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:37.349 [2024-12-14 02:57:51.815596] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
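The nvmf_tcp_init trace above splits the two E810 ports into an initiator side (cvl_0_1, 10.0.0.1) and a target side (cvl_0_0 moved into the cvl_0_0_ns_spdk namespace, 10.0.0.2), opens TCP port 4420 in iptables, and ping-checks both directions. Collected into one standalone sketch, with commands, names and addresses copied from the trace; it needs root and the cvl_* interfaces to exist:

#!/usr/bin/env bash
# Sketch of the namespace-based target/initiator split traced above.
set -e
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (host)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port
ping -c 1 10.0.0.2                                                   # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host

The nvmf_tgt application is then launched with "ip netns exec cvl_0_0_ns_spdk ..." (nvmfappstart above), so its TCP listener binds inside the namespace while the RPC socket stays reachable from the host.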
00:16:37.349 [2024-12-14 02:57:51.815648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.349 [2024-12-14 02:57:51.896337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:37.349 [2024-12-14 02:57:51.919073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.349 [2024-12-14 02:57:51.919112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.349 [2024-12-14 02:57:51.919120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.349 [2024-12-14 02:57:51.919126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.349 [2024-12-14 02:57:51.919131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:37.349 [2024-12-14 02:57:51.920420] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.349 [2024-12-14 02:57:51.920531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.349 [2024-12-14 02:57:51.920639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.349 [2024-12-14 02:57:51.920640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:37.349 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.349 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:37.349 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:37.349 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:37.349 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:37.349 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.349 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:37.350 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:37.350 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:37.350 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:37.350 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:37.350 "nvmf_tgt_1" 00:16:37.350 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:37.350 "nvmf_tgt_2" 00:16:37.350 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
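The multitarget.sh steps traced here and continued just below boil down to counting targets with multitarget_rpc.py plus jq, adding two extra targets, and deleting them again. A condensed sketch of that flow, with the script path and flags copied from the trace and the surrounding error handling simplified:

#!/usr/bin/env bash
# Condensed sketch of the create/verify/delete flow in multitarget.sh.
set -e
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]       # only the default target exists
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]       # default + the two new targets
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]       # back to just the default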
00:16:37.350 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:37.609 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:37.609 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:37.609 true 00:16:37.609 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:37.609 true 00:16:37.609 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:37.609 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:37.868 rmmod nvme_tcp 00:16:37.868 rmmod nvme_fabrics 00:16:37.868 rmmod nvme_keyring 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 264284 ']' 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 264284 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 264284 ']' 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 264284 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 264284 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:37.868 02:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 264284' 00:16:37.868 killing process with pid 264284 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 264284 00:16:37.868 02:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 264284 00:16:38.128 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:38.128 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:38.128 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:38.128 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:38.128 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:38.128 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:38.128 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:38.128 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:38.128 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:38.128 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.128 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.128 02:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:40.666 00:16:40.666 real 0m9.501s 00:16:40.666 user 0m7.237s 00:16:40.666 sys 0m4.808s 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:40.666 ************************************ 00:16:40.666 END TEST nvmf_multitarget 00:16:40.666 ************************************ 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:40.666 ************************************ 00:16:40.666 START TEST nvmf_rpc 00:16:40.666 ************************************ 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:40.666 * Looking for test storage... 
00:16:40.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.666 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:40.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.667 --rc genhtml_branch_coverage=1 00:16:40.667 --rc genhtml_function_coverage=1 00:16:40.667 --rc genhtml_legend=1 00:16:40.667 --rc geninfo_all_blocks=1 00:16:40.667 --rc geninfo_unexecuted_blocks=1 00:16:40.667 00:16:40.667 ' 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:40.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.667 --rc genhtml_branch_coverage=1 00:16:40.667 --rc genhtml_function_coverage=1 00:16:40.667 --rc genhtml_legend=1 00:16:40.667 --rc geninfo_all_blocks=1 00:16:40.667 --rc geninfo_unexecuted_blocks=1 00:16:40.667 00:16:40.667 ' 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:40.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.667 --rc genhtml_branch_coverage=1 00:16:40.667 --rc genhtml_function_coverage=1 00:16:40.667 --rc genhtml_legend=1 00:16:40.667 --rc geninfo_all_blocks=1 00:16:40.667 --rc geninfo_unexecuted_blocks=1 00:16:40.667 00:16:40.667 ' 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:40.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.667 --rc genhtml_branch_coverage=1 00:16:40.667 --rc genhtml_function_coverage=1 00:16:40.667 --rc genhtml_legend=1 00:16:40.667 --rc geninfo_all_blocks=1 00:16:40.667 --rc geninfo_unexecuted_blocks=1 00:16:40.667 00:16:40.667 ' 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
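The scripts/common.sh trace above (lt 1.15 2 going through cmp_versions) splits both version strings on '.', '-' and ':' and compares them component by component to decide whether the installed lcov is new enough. A simplified re-implementation of that idea; the function name is mine, and it handles plain numeric components only, without the decimal normalization the real script performs:

#!/usr/bin/env bash
# Simplified sketch of the cmp_versions-style check traced above.
version_lt() {
    local IFS=.-:
    local -a ver1=($1) ver2=($2)
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1                                              # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"                      # matches the 'lt 1.15 2' check in the log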
00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:40.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:40.667 02:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:40.667 02:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:45.946 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:45.946 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:45.946 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:45.947 Found net devices under 0000:af:00.0: cvl_0_0 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:45.947 Found net devices under 0000:af:00.1: cvl_0_1 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:45.947 02:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:45.947 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:46.206 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:46.206 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:46.206 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:46.206 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:46.206 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:46.465 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:46.465 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:46.465 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:46.465 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:46.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:16:46.465 00:16:46.465 --- 10.0.0.2 ping statistics --- 00:16:46.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.465 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:46.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:46.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:16:46.466 00:16:46.466 --- 10.0.0.1 ping statistics --- 00:16:46.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.466 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=268002 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 268002 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 268002 ']' 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.466 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.466 [2024-12-14 02:58:01.478139] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
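With the target app now running inside the namespace, the rpc.sh trace that follows reads nvmf_get_stats, counts one poll group per reactor core, confirms there is no transport yet, and then creates the TCP transport. Roughly the same sequence can be driven with scripts/rpc.py against a running nvmf_tgt; the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions here, while the create_transport flags are copied from the trace:

#!/usr/bin/env bash
# Sketch of the poll-group/transport checks traced below, via scripts/rpc.py.
set -e
rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

stats=$($rpc_py nvmf_get_stats)
echo "$stats" | jq '.poll_groups[].name' | wc -l          # expect 4 groups for -m 0xF
echo "$stats" | jq '.poll_groups[0].transports[0]'        # null before any transport exists
$rpc_py nvmf_create_transport -t tcp -o -u 8192           # flags copied from the trace
$rpc_py nvmf_get_stats | jq '.poll_groups[0].transports[0].trtype'   # now "TCP"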
00:16:46.466 [2024-12-14 02:58:01.478184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.466 [2024-12-14 02:58:01.555587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.466 [2024-12-14 02:58:01.578903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.466 [2024-12-14 02:58:01.578938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.466 [2024-12-14 02:58:01.578946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.466 [2024-12-14 02:58:01.578951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.466 [2024-12-14 02:58:01.578956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.466 [2024-12-14 02:58:01.582330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.466 [2024-12-14 02:58:01.582361] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.466 [2024-12-14 02:58:01.582467] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.466 [2024-12-14 02:58:01.582468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.724 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.724 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:46.724 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:46.724 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:46.724 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.724 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.724 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:46.724 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.724 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.724 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.724 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:46.724 "tick_rate": 2100000000, 00:16:46.724 "poll_groups": [ 00:16:46.724 { 00:16:46.724 "name": "nvmf_tgt_poll_group_000", 00:16:46.724 "admin_qpairs": 0, 00:16:46.724 "io_qpairs": 0, 00:16:46.724 "current_admin_qpairs": 0, 00:16:46.724 "current_io_qpairs": 0, 00:16:46.724 "pending_bdev_io": 0, 00:16:46.724 "completed_nvme_io": 0, 00:16:46.724 "transports": [] 00:16:46.724 }, 00:16:46.724 { 00:16:46.724 "name": "nvmf_tgt_poll_group_001", 00:16:46.724 "admin_qpairs": 0, 00:16:46.724 "io_qpairs": 0, 00:16:46.724 "current_admin_qpairs": 0, 00:16:46.724 "current_io_qpairs": 0, 00:16:46.725 "pending_bdev_io": 0, 00:16:46.725 "completed_nvme_io": 0, 00:16:46.725 "transports": [] 00:16:46.725 }, 00:16:46.725 { 00:16:46.725 "name": "nvmf_tgt_poll_group_002", 00:16:46.725 "admin_qpairs": 0, 00:16:46.725 "io_qpairs": 0, 00:16:46.725 
"current_admin_qpairs": 0, 00:16:46.725 "current_io_qpairs": 0, 00:16:46.725 "pending_bdev_io": 0, 00:16:46.725 "completed_nvme_io": 0, 00:16:46.725 "transports": [] 00:16:46.725 }, 00:16:46.725 { 00:16:46.725 "name": "nvmf_tgt_poll_group_003", 00:16:46.725 "admin_qpairs": 0, 00:16:46.725 "io_qpairs": 0, 00:16:46.725 "current_admin_qpairs": 0, 00:16:46.725 "current_io_qpairs": 0, 00:16:46.725 "pending_bdev_io": 0, 00:16:46.725 "completed_nvme_io": 0, 00:16:46.725 "transports": [] 00:16:46.725 } 00:16:46.725 ] 00:16:46.725 }' 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.725 [2024-12-14 02:58:01.823392] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:46.725 "tick_rate": 2100000000, 00:16:46.725 "poll_groups": [ 00:16:46.725 { 00:16:46.725 "name": "nvmf_tgt_poll_group_000", 00:16:46.725 "admin_qpairs": 0, 00:16:46.725 "io_qpairs": 0, 00:16:46.725 "current_admin_qpairs": 0, 00:16:46.725 "current_io_qpairs": 0, 00:16:46.725 "pending_bdev_io": 0, 00:16:46.725 "completed_nvme_io": 0, 00:16:46.725 "transports": [ 00:16:46.725 { 00:16:46.725 "trtype": "TCP" 00:16:46.725 } 00:16:46.725 ] 00:16:46.725 }, 00:16:46.725 { 00:16:46.725 "name": "nvmf_tgt_poll_group_001", 00:16:46.725 "admin_qpairs": 0, 00:16:46.725 "io_qpairs": 0, 00:16:46.725 "current_admin_qpairs": 0, 00:16:46.725 "current_io_qpairs": 0, 00:16:46.725 "pending_bdev_io": 0, 00:16:46.725 "completed_nvme_io": 0, 00:16:46.725 "transports": [ 00:16:46.725 { 00:16:46.725 "trtype": "TCP" 00:16:46.725 } 00:16:46.725 ] 00:16:46.725 }, 00:16:46.725 { 00:16:46.725 "name": "nvmf_tgt_poll_group_002", 00:16:46.725 "admin_qpairs": 0, 00:16:46.725 "io_qpairs": 0, 00:16:46.725 "current_admin_qpairs": 0, 00:16:46.725 "current_io_qpairs": 0, 00:16:46.725 "pending_bdev_io": 0, 00:16:46.725 "completed_nvme_io": 0, 00:16:46.725 "transports": [ 00:16:46.725 { 00:16:46.725 "trtype": "TCP" 
00:16:46.725 } 00:16:46.725 ] 00:16:46.725 }, 00:16:46.725 { 00:16:46.725 "name": "nvmf_tgt_poll_group_003", 00:16:46.725 "admin_qpairs": 0, 00:16:46.725 "io_qpairs": 0, 00:16:46.725 "current_admin_qpairs": 0, 00:16:46.725 "current_io_qpairs": 0, 00:16:46.725 "pending_bdev_io": 0, 00:16:46.725 "completed_nvme_io": 0, 00:16:46.725 "transports": [ 00:16:46.725 { 00:16:46.725 "trtype": "TCP" 00:16:46.725 } 00:16:46.725 ] 00:16:46.725 } 00:16:46.725 ] 00:16:46.725 }' 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:46.725 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.984 Malloc1 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.984 02:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
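
With -m 0xF the target runs four reactors, so nvmf_get_stats reports four poll groups, each starting with an empty transports list and zero qpairs; the jcount/jsum helpers above are thin jq/awk wrappers that assert exactly that, and after nvmf_create_transport -t tcp -o -u 8192 each poll group picks up a TCP transport entry while the qpair sums stay at zero. The RPC socket is the path-based /var/tmp/spdk.sock, so the same checks can be run by hand from the default namespace; a rough equivalent, again with /path/to/spdk as a placeholder, is:

  RPC=/path/to/spdk/scripts/rpc.py          # placeholder path; rpc_cmd in the trace wraps this script

  $RPC nvmf_get_stats | jq '.poll_groups | length'             # 4 poll groups, one per core in 0xF
  $RPC nvmf_get_stats | jq '.poll_groups[0].transports[0]'     # null before any transport exists

  $RPC nvmf_create_transport -t tcp -o -u 8192                 # same options as the trace
  $RPC nvmf_get_stats | jq '[.poll_groups[].transports[].trtype] | unique'   # ["TCP"]
  $RPC nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'                # still 0

Once those checks pass, the script creates the 64 MB Malloc1 bdev (512-byte blocks) and the first cnode1 subsystem seen just above.
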
common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.984 [2024-12-14 02:58:02.013005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.984 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:46.985 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:46.985 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:46.985 [2024-12-14 02:58:02.041761] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:46.985 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:46.985 could not add new controller: failed to write to nvme-fabrics device 00:16:46.985 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:46.985 02:58:02 
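
The connect attempt just above is expected to fail, which is why it runs under the harness's NOT wrapper: NOT resolves the binary (valid_exec_arg / type -P), runs it, and inverts the result so the step only passes when the command itself fails. Ignoring the extra handling the real helper in autotest_common.sh has for signal exits, a minimal sketch of the idea is:

  # simplified stand-in for the autotest NOT helper, not the original implementation
  NOT() {
      if "$@"; then
          return 1        # the command unexpectedly succeeded: the negative test fails
      fi
      return 0            # the command failed as expected
  }

  # passes only while the subsystem still rejects this host NQN
  NOT nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"

Here HOSTNQN is a placeholder for whatever NQN the initiator presents; the trace uses this machine's UUID-based 80b56b8f-... NQN.
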
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:46.985 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:46.985 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:46.985 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:46.985 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.985 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.985 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.985 02:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:48.363 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:48.363 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:48.363 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:48.363 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:48.363 02:58:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:50.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:50.268 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:50.527 [2024-12-14 02:58:05.406781] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:50.527 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:50.527 could not add new controller: failed to write to nvme-fabrics device 00:16:50.527 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:50.527 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:50.527 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:50.527 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:50.527 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:50.527 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.527 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.527 
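
Those "does not allow host" failures are the point of this part of the test: cnode1 had allow-any-host switched off (nvmf_subsystem_allow_any_host -d), so a connect from an unregistered host NQN is rejected and nvme-cli reports an I/O error on /dev/nvme-fabrics. Access is then granted either per host with nvmf_subsystem_add_host or globally with nvmf_subsystem_allow_any_host -e, after which the same connect (rpc.sh@73, next in the trace) succeeds. A condensed sketch of that sequence, with /path/to/spdk and HOSTNQN as placeholders, looks like:

  RPC=/path/to/spdk/scripts/rpc.py
  HOSTNQN=$(cat /etc/nvme/hostnqn)          # assumes nvme-cli has generated a host NQN

  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # not yet allowed: expected to fail with "does not allow host"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" || true

  # allow this host explicitly, connect, disconnect, then revoke again
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"

  # or open the subsystem to any host
  $RPC nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
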
02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.527 02:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:51.464 02:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:51.464 02:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:51.464 02:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.464 02:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:51.464 02:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:54.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.001 
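
Each of these connects is bracketed by waitforserial and waitforserial_disconnect, which never touch the RPC socket: they poll lsblk -l -o NAME,SERIAL (up to 16 tries, sleeping 2 s before each check) until a block device carrying the subsystem serial SPDKISFASTANDAWESOME appears or disappears. Stripped down and renamed, the polling amounts to something like:

  SERIAL=SPDKISFASTANDAWESOME

  # simplified stand-ins for the autotest helpers; names and structure are not the originals
  wait_for_serial() {
      local i=0
      while (( i++ <= 15 )); do
          sleep 2
          lsblk -l -o NAME,SERIAL | grep -q -w "$SERIAL" && return 0
      done
      return 1
  }

  wait_for_serial_disconnect() {
      local i=0
      while (( i++ <= 15 )); do
          lsblk -l -o NAME,SERIAL | grep -q -w "$SERIAL" || return 0
          sleep 2
      done
      return 1
  }
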
02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.001 [2024-12-14 02:58:08.729390] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.001 02:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:54.938 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:54.938 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:54.938 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:54.938 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:54.938 02:58:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:56.844 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:56.844 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:56.844 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:56.844 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:56.844 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:56.844 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:56.844 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:56.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.844 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:56.844 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:57.103 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:57.103 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.103 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:57.103 02:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.103 [2024-12-14 02:58:12.044741] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.103 02:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:58.481 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:58.481 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:58.481 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:58.481 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:58.481 02:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:00.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.386 [2024-12-14 02:58:15.481147] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.386 02:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:01.763 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:01.763 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:01.763 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:01.763 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:01.763 02:58:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:03.669 
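
From rpc.sh@81 onward this create, connect, tear-down cycle runs five times (seq 1 5), each pass re-creating cnode1, re-adding the 10.0.0.2:4420 listener, attaching Malloc1 as namespace 5, opening the subsystem to any host, connecting from the default namespace, waiting for the serial, disconnecting, and finally removing namespace 5 and deleting the subsystem. Collapsed into one loop over the same RPCs, with the same placeholders as before:

  RPC=/path/to/spdk/scripts/rpc.py          # placeholder path
  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=$(cat /etc/nvme/hostnqn)          # assumes nvme-cli has generated a host NQN

  for i in $(seq 1 5); do
      $RPC nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
      $RPC nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
      $RPC nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5       # fixed NSID 5 every pass
      $RPC nvmf_subsystem_allow_any_host "$SUBNQN"

      nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
      # ...poll for the SPDKISFASTANDAWESOME serial as sketched above...
      nvme disconnect -n "$SUBNQN"

      $RPC nvmf_subsystem_remove_ns "$SUBNQN" 5
      $RPC nvmf_delete_subsystem "$SUBNQN"
  done
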
02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:03.669 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:03.669 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:03.669 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:03.669 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:03.669 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:03.669 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:03.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.928 [2024-12-14 02:58:18.875921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.928 02:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:05.307 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:05.307 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:05.307 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:05.307 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:05.307 02:58:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:07.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.211 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.471 [2024-12-14 02:58:22.371581] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.471 02:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:08.414 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:08.414 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:08.414 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:08.414 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:08.414 02:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:10.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:10.952 
02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.952 [2024-12-14 02:58:25.638646] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.952 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 [2024-12-14 02:58:25.690705] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 
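
The trailing loop (rpc.sh@99-107) is control-plane churn only: five times it creates cnode1, adds the TCP listener, attaches Malloc1 with no explicit NSID (so it lands as namespace 1), enables allow-any-host, then immediately removes namespace 1 and deletes the subsystem, without ever connecting an initiator. In isolation, with the same placeholders as above:

  RPC=/path/to/spdk/scripts/rpc.py          # placeholder path
  SUBNQN=nqn.2016-06.io.spdk:cnode1

  for i in $(seq 1 5); do
      $RPC nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
      $RPC nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
      $RPC nvmf_subsystem_add_ns "$SUBNQN" Malloc1       # no -n: first free NSID, i.e. 1
      $RPC nvmf_subsystem_allow_any_host "$SUBNQN"
      $RPC nvmf_subsystem_remove_ns "$SUBNQN" 1
      $RPC nvmf_delete_subsystem "$SUBNQN"
  done
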
02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 [2024-12-14 02:58:25.738847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 [2024-12-14 02:58:25.787022] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 [2024-12-14 02:58:25.839205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:10.953 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:10.954 "tick_rate": 2100000000, 00:17:10.954 "poll_groups": [ 00:17:10.954 { 00:17:10.954 "name": "nvmf_tgt_poll_group_000", 00:17:10.954 "admin_qpairs": 2, 00:17:10.954 "io_qpairs": 168, 00:17:10.954 "current_admin_qpairs": 0, 00:17:10.954 "current_io_qpairs": 0, 00:17:10.954 "pending_bdev_io": 0, 00:17:10.954 "completed_nvme_io": 219, 00:17:10.954 "transports": [ 00:17:10.954 { 00:17:10.954 "trtype": "TCP" 00:17:10.954 } 00:17:10.954 ] 00:17:10.954 }, 00:17:10.954 { 00:17:10.954 "name": "nvmf_tgt_poll_group_001", 00:17:10.954 "admin_qpairs": 2, 00:17:10.954 "io_qpairs": 168, 00:17:10.954 "current_admin_qpairs": 0, 00:17:10.954 "current_io_qpairs": 0, 00:17:10.954 "pending_bdev_io": 0, 00:17:10.954 "completed_nvme_io": 306, 00:17:10.954 "transports": [ 00:17:10.954 { 00:17:10.954 "trtype": "TCP" 00:17:10.954 } 00:17:10.954 ] 00:17:10.954 }, 00:17:10.954 { 00:17:10.954 "name": "nvmf_tgt_poll_group_002", 00:17:10.954 "admin_qpairs": 1, 00:17:10.954 "io_qpairs": 168, 00:17:10.954 "current_admin_qpairs": 0, 00:17:10.954 "current_io_qpairs": 0, 00:17:10.954 "pending_bdev_io": 0, 00:17:10.954 "completed_nvme_io": 230, 00:17:10.954 "transports": [ 00:17:10.954 { 00:17:10.954 "trtype": "TCP" 00:17:10.954 } 00:17:10.954 ] 00:17:10.954 }, 00:17:10.954 { 00:17:10.954 "name": "nvmf_tgt_poll_group_003", 00:17:10.954 "admin_qpairs": 2, 00:17:10.954 "io_qpairs": 168, 00:17:10.954 "current_admin_qpairs": 0, 00:17:10.954 "current_io_qpairs": 0, 00:17:10.954 "pending_bdev_io": 0, 00:17:10.954 "completed_nvme_io": 267, 00:17:10.954 "transports": [ 00:17:10.954 { 00:17:10.954 "trtype": "TCP" 00:17:10.954 } 00:17:10.954 ] 00:17:10.954 } 00:17:10.954 ] 00:17:10.954 }' 00:17:10.954 02:58:25 
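Condensed, the loop traced above (target/rpc.sh lines 99-107) exercises the full NVMe-oF subsystem lifecycle once per iteration: create the subsystem, add a TCP listener on 10.0.0.2:4420, attach the Malloc1 namespace, allow any host, then remove the namespace and delete the subsystem. A minimal standalone sketch of that sequence, bypassing the rpc_cmd/xtrace wrapper and assuming scripts/rpc.py from the SPDK tree plus an existing Malloc1 bdev (the loop count shown here is only an assumption; the harness sets $loops elsewhere):

# Sketch only: the traced RPC sequence without the harness plumbing.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
loops=5                                                                  # assumption
for i in $(seq 1 "$loops"); do
    "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME          # rpc.sh@100
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420 # rpc.sh@101
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1                          # rpc.sh@102
    "$rpc" nvmf_subsystem_allow_any_host "$nqn"                          # rpc.sh@103
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1                             # rpc.sh@105
    "$rpc" nvmf_delete_subsystem "$nqn"                                  # rpc.sh@107
done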
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:10.954 02:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:10.954 rmmod nvme_tcp 00:17:10.954 rmmod nvme_fabrics 00:17:10.954 rmmod nvme_keyring 00:17:10.954 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:10.954 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:10.954 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:10.954 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 268002 ']' 00:17:10.954 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 268002 00:17:10.954 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 268002 ']' 00:17:10.954 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 268002 00:17:10.954 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:10.954 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.954 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 268002 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 268002' 
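The jsum calls traced above reduce the nvmf_get_stats output to one number per field: jq pulls the value out of every poll group and awk sums the column, which is where the (( 7 > 0 )) and (( 672 > 0 )) checks come from (admin qpairs 2+2+1+2, I/O qpairs 4x168). A minimal re-creation of that helper, assuming jq is installed and $stats holds the JSON captured above:

# jsum sketch: sum one jq path across all poll groups in "$stats".
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}
jsum '.poll_groups[].admin_qpairs'   # 7 for the stats above
jsum '.poll_groups[].io_qpairs'      # 672 for the stats above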
00:17:11.214 killing process with pid 268002 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 268002 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 268002 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.214 02:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:13.758 00:17:13.758 real 0m33.090s 00:17:13.758 user 1m40.210s 00:17:13.758 sys 0m6.342s 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.758 ************************************ 00:17:13.758 END TEST nvmf_rpc 00:17:13.758 ************************************ 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:13.758 ************************************ 00:17:13.758 START TEST nvmf_invalid 00:17:13.758 ************************************ 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:13.758 * Looking for test storage... 
00:17:13.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:13.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.758 --rc genhtml_branch_coverage=1 00:17:13.758 --rc genhtml_function_coverage=1 00:17:13.758 --rc genhtml_legend=1 00:17:13.758 --rc geninfo_all_blocks=1 00:17:13.758 --rc geninfo_unexecuted_blocks=1 00:17:13.758 00:17:13.758 ' 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:13.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.758 --rc genhtml_branch_coverage=1 00:17:13.758 --rc genhtml_function_coverage=1 00:17:13.758 --rc genhtml_legend=1 00:17:13.758 --rc geninfo_all_blocks=1 00:17:13.758 --rc geninfo_unexecuted_blocks=1 00:17:13.758 00:17:13.758 ' 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:13.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.758 --rc genhtml_branch_coverage=1 00:17:13.758 --rc genhtml_function_coverage=1 00:17:13.758 --rc genhtml_legend=1 00:17:13.758 --rc geninfo_all_blocks=1 00:17:13.758 --rc geninfo_unexecuted_blocks=1 00:17:13.758 00:17:13.758 ' 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:13.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.758 --rc genhtml_branch_coverage=1 00:17:13.758 --rc genhtml_function_coverage=1 00:17:13.758 --rc genhtml_legend=1 00:17:13.758 --rc geninfo_all_blocks=1 00:17:13.758 --rc geninfo_unexecuted_blocks=1 00:17:13.758 00:17:13.758 ' 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.758 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:13.758 02:58:28 
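The lt/cmp_versions exchange traced above is the lcov gate in scripts/common.sh: version strings are split on dots and dashes and compared component by component so the right set of coverage options can be chosen. A simplified stand-in for that comparison, not the common.sh implementation itself (it assumes purely numeric components):

# Simplified component-wise "less than" in the spirit of cmp_versions (sketch).
version_lt() {
    local -a a b
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                          # equal is not less-than
}
version_lt 1.15 2 && echo 'lcov older than 2.x: use the legacy --rc lcov_* options'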
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:13.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:13.759 02:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:20.336 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:20.336 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.336 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:20.337 Found net devices under 0000:af:00.0: cvl_0_0 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:20.337 Found net devices under 0000:af:00.1: cvl_0_1 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
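The device scan traced above resolves each detected E810 PCI function to its kernel net interface through sysfs before settling on cvl_0_0 as the target-side port and cvl_0_1 as the initiator side. A hedged one-off version of that lookup (the PCI address is simply the one from this run):

# Sketch: list the net interfaces backing one PCI function via sysfs.
pci=0000:af:00.0
pci_net_devs=( /sys/bus/pci/devices/$pci/net/* )
if [[ -e ${pci_net_devs[0]} ]]; then
    echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
else
    echo "No net devices under $pci"
fi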
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:20.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:17:20.337 00:17:20.337 --- 10.0.0.2 ping statistics --- 00:17:20.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.337 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:20.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:17:20.337 00:17:20.337 --- 10.0.0.1 ping statistics --- 00:17:20.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.337 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=275650 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 275650 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 275650 ']' 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:20.337 [2024-12-14 02:58:34.602914] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
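Before the target application output continues below, the rig the trace just assembled is worth restating: one NIC port is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the other stays in the root namespace as 10.0.0.1, TCP/4420 is opened in the firewall, connectivity is confirmed with a ping in each direction, and nvmf_tgt is then launched inside the namespace. A hedged recap as a standalone root-only script (interface names, addresses and flags taken from this run; error handling omitted):

# Recap of the traced namespace setup; run as root, names/IPs as in this log.
tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"
ip addr add 10.0.0.1/24 dev "$ini_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                        # root namespace -> target namespace
ip netns exec "$ns" ping -c 1 10.0.0.1    # target namespace -> root namespace
ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # path relative to the SPDK tree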
00:17:20.337 [2024-12-14 02:58:34.602954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.337 [2024-12-14 02:58:34.680734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.337 [2024-12-14 02:58:34.702831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.337 [2024-12-14 02:58:34.702867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.337 [2024-12-14 02:58:34.702876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.337 [2024-12-14 02:58:34.702882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.337 [2024-12-14 02:58:34.702886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.337 [2024-12-14 02:58:34.704367] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.337 [2024-12-14 02:58:34.704409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.337 [2024-12-14 02:58:34.704515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.337 [2024-12-14 02:58:34.704516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:20.337 02:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3293 00:17:20.337 [2024-12-14 02:58:35.004871] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:20.337 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:20.337 { 00:17:20.337 "nqn": "nqn.2016-06.io.spdk:cnode3293", 00:17:20.337 "tgt_name": "foobar", 00:17:20.337 "method": "nvmf_create_subsystem", 00:17:20.337 "req_id": 1 00:17:20.337 } 00:17:20.337 Got JSON-RPC error response 00:17:20.337 response: 00:17:20.337 { 00:17:20.337 "code": -32603, 00:17:20.338 "message": "Unable to find target foobar" 00:17:20.338 }' 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:20.338 { 00:17:20.338 "nqn": "nqn.2016-06.io.spdk:cnode3293", 00:17:20.338 "tgt_name": "foobar", 00:17:20.338 "method": "nvmf_create_subsystem", 00:17:20.338 "req_id": 1 00:17:20.338 } 00:17:20.338 Got JSON-RPC error response 00:17:20.338 
response: 00:17:20.338 { 00:17:20.338 "code": -32603, 00:17:20.338 "message": "Unable to find target foobar" 00:17:20.338 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13907 00:17:20.338 [2024-12-14 02:58:35.193508] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13907: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:20.338 { 00:17:20.338 "nqn": "nqn.2016-06.io.spdk:cnode13907", 00:17:20.338 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:20.338 "method": "nvmf_create_subsystem", 00:17:20.338 "req_id": 1 00:17:20.338 } 00:17:20.338 Got JSON-RPC error response 00:17:20.338 response: 00:17:20.338 { 00:17:20.338 "code": -32602, 00:17:20.338 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:20.338 }' 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:20.338 { 00:17:20.338 "nqn": "nqn.2016-06.io.spdk:cnode13907", 00:17:20.338 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:20.338 "method": "nvmf_create_subsystem", 00:17:20.338 "req_id": 1 00:17:20.338 } 00:17:20.338 Got JSON-RPC error response 00:17:20.338 response: 00:17:20.338 { 00:17:20.338 "code": -32602, 00:17:20.338 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:20.338 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31373 00:17:20.338 [2024-12-14 02:58:35.418229] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31373: invalid model number 'SPDK_Controller' 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:20.338 { 00:17:20.338 "nqn": "nqn.2016-06.io.spdk:cnode31373", 00:17:20.338 "model_number": "SPDK_Controller\u001f", 00:17:20.338 "method": "nvmf_create_subsystem", 00:17:20.338 "req_id": 1 00:17:20.338 } 00:17:20.338 Got JSON-RPC error response 00:17:20.338 response: 00:17:20.338 { 00:17:20.338 "code": -32602, 00:17:20.338 "message": "Invalid MN SPDK_Controller\u001f" 00:17:20.338 }' 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:20.338 { 00:17:20.338 "nqn": "nqn.2016-06.io.spdk:cnode31373", 00:17:20.338 "model_number": "SPDK_Controller\u001f", 00:17:20.338 "method": "nvmf_create_subsystem", 00:17:20.338 "req_id": 1 00:17:20.338 } 00:17:20.338 Got JSON-RPC error response 00:17:20.338 response: 00:17:20.338 { 00:17:20.338 "code": -32602, 00:17:20.338 "message": "Invalid MN SPDK_Controller\u001f" 00:17:20.338 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:20.338 02:58:35 
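Each negative case above feeds nvmf_create_subsystem exactly one bad field and then pattern-matches the JSON-RPC error text. The same checks can be reproduced by hand along these lines (a sketch; $rpc is the scripts/rpc.py path set in invalid.sh, and the cnode numbers are the random ones from this run):

# Sketch of the three negative checks above; 2>&1 keeps the error text greppable.
"$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3293 2>&1 \
    | grep -q 'Unable to find target'     # unknown target name
"$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13907 2>&1 \
    | grep -q 'Invalid SN'                # non-printable byte in the serial number
"$rpc" nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31373 2>&1 \
    | grep -q 'Invalid MN'                # non-printable byte in the model number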
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:20.338 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.598 02:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:20.598 
02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.598 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 
00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ r == \- ]] 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'reu{QV&HU:{2Yad^"aT]R' 00:17:20.599 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'reu{QV&HU:{2Yad^"aT]R' nqn.2016-06.io.spdk:cnode9459 00:17:20.859 [2024-12-14 02:58:35.763412] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9459: invalid serial number 'reu{QV&HU:{2Yad^"aT]R' 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:20.859 { 00:17:20.859 "nqn": "nqn.2016-06.io.spdk:cnode9459", 00:17:20.859 "serial_number": "reu{QV&HU:{2Yad^\"aT]R", 00:17:20.859 "method": "nvmf_create_subsystem", 00:17:20.859 "req_id": 1 00:17:20.859 } 00:17:20.859 Got JSON-RPC error response 00:17:20.859 response: 00:17:20.859 { 00:17:20.859 "code": -32602, 00:17:20.859 "message": "Invalid SN reu{QV&HU:{2Yad^\"aT]R" 00:17:20.859 }' 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:20.859 { 00:17:20.859 "nqn": "nqn.2016-06.io.spdk:cnode9459", 00:17:20.859 "serial_number": "reu{QV&HU:{2Yad^\"aT]R", 00:17:20.859 "method": "nvmf_create_subsystem", 00:17:20.859 "req_id": 1 00:17:20.859 } 00:17:20.859 Got JSON-RPC error response 00:17:20.859 response: 00:17:20.859 { 00:17:20.859 "code": -32602, 00:17:20.859 "message": "Invalid SN reu{QV&HU:{2Yad^\"aT]R" 00:17:20.859 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 
00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:20.859 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 
00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:20.860 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.120 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:21.120 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:21.120 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:21.120 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.120 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.120 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:21.120 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:21.120 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:21.120 02:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.120 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.120 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:21.120 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:21.120 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:21.120 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.120 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.120 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:21.120 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:21.120 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=Z 00:17:21.120 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ f == \- ]] 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'fg^De:Ef^Sp25%zJz`jsfk\Blc2{^&v)ZT)9`@.mH' 00:17:21.121 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'fg^De:Ef^Sp25%zJz`jsfk\Blc2{^&v)ZT)9`@.mH' nqn.2016-06.io.spdk:cnode5515 00:17:21.121 [2024-12-14 02:58:36.236976] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5515: invalid model number 'fg^De:Ef^Sp25%zJz`jsfk\Blc2{^&v)ZT)9`@.mH' 00:17:21.380 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:21.380 { 00:17:21.380 "nqn": "nqn.2016-06.io.spdk:cnode5515", 00:17:21.380 "model_number": "fg^De:Ef^Sp25%zJz`jsfk\\Blc2{^&v)ZT)9`@.mH", 00:17:21.380 "method": "nvmf_create_subsystem", 00:17:21.380 "req_id": 1 00:17:21.380 } 00:17:21.380 Got JSON-RPC error response 00:17:21.380 response: 00:17:21.380 { 00:17:21.380 "code": -32602, 00:17:21.380 "message": "Invalid MN fg^De:Ef^Sp25%zJz`jsfk\\Blc2{^&v)ZT)9`@.mH" 00:17:21.380 }' 00:17:21.380 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:21.380 { 00:17:21.380 "nqn": "nqn.2016-06.io.spdk:cnode5515", 00:17:21.380 "model_number": "fg^De:Ef^Sp25%zJz`jsfk\\Blc2{^&v)ZT)9`@.mH", 00:17:21.380 "method": "nvmf_create_subsystem", 00:17:21.380 "req_id": 1 00:17:21.380 } 00:17:21.380 Got JSON-RPC error response 00:17:21.380 response: 00:17:21.380 { 00:17:21.380 "code": -32602, 00:17:21.380 "message": "Invalid MN fg^De:Ef^Sp25%zJz`jsfk\\Blc2{^&v)ZT)9`@.mH" 00:17:21.380 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:21.381 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:21.381 [2024-12-14 02:58:36.441712] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.381 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:21.640 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:21.640 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:21.640 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:21.640 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:21.640 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:21.899 [2024-12-14 02:58:36.867116] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:21.899 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:21.899 { 00:17:21.899 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:21.899 "listen_address": { 00:17:21.899 "trtype": "tcp", 00:17:21.899 "traddr": "", 00:17:21.899 "trsvcid": "4421" 00:17:21.899 }, 00:17:21.899 "method": "nvmf_subsystem_remove_listener", 00:17:21.899 "req_id": 1 00:17:21.899 } 00:17:21.899 Got JSON-RPC error response 00:17:21.899 response: 00:17:21.899 { 00:17:21.899 "code": -32602, 00:17:21.899 "message": "Invalid parameters" 00:17:21.899 }' 00:17:21.899 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:21.899 { 00:17:21.899 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:21.899 "listen_address": { 00:17:21.899 "trtype": "tcp", 00:17:21.899 "traddr": "", 00:17:21.899 "trsvcid": "4421" 00:17:21.899 }, 00:17:21.899 "method": "nvmf_subsystem_remove_listener", 00:17:21.899 "req_id": 1 00:17:21.899 } 00:17:21.899 Got JSON-RPC error response 00:17:21.899 response: 00:17:21.899 { 00:17:21.899 "code": -32602, 00:17:21.899 "message": "Invalid parameters" 00:17:21.899 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:21.899 02:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3565 -i 0 00:17:22.159 [2024-12-14 02:58:37.071748] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3565: invalid cntlid range [0-65519] 00:17:22.159 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:22.159 { 00:17:22.159 "nqn": "nqn.2016-06.io.spdk:cnode3565", 00:17:22.159 "min_cntlid": 0, 00:17:22.159 "method": "nvmf_create_subsystem", 00:17:22.159 "req_id": 1 00:17:22.159 } 00:17:22.159 Got JSON-RPC error response 00:17:22.159 response: 00:17:22.159 { 00:17:22.159 "code": -32602, 00:17:22.159 "message": "Invalid cntlid range [0-65519]" 00:17:22.159 }' 00:17:22.159 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:22.159 { 00:17:22.159 "nqn": "nqn.2016-06.io.spdk:cnode3565", 00:17:22.159 "min_cntlid": 0, 00:17:22.159 "method": "nvmf_create_subsystem", 00:17:22.159 "req_id": 1 00:17:22.159 } 00:17:22.159 Got JSON-RPC error response 00:17:22.159 response: 00:17:22.159 { 00:17:22.159 "code": -32602, 00:17:22.159 "message": "Invalid cntlid range [0-65519]" 00:17:22.159 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:22.159 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode12432 -i 65520 00:17:22.159 [2024-12-14 02:58:37.268412] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12432: invalid cntlid range [65520-65519] 00:17:22.418 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:22.418 { 00:17:22.418 "nqn": "nqn.2016-06.io.spdk:cnode12432", 00:17:22.419 "min_cntlid": 65520, 00:17:22.419 "method": "nvmf_create_subsystem", 00:17:22.419 "req_id": 1 00:17:22.419 } 00:17:22.419 Got JSON-RPC error response 00:17:22.419 response: 00:17:22.419 { 00:17:22.419 "code": -32602, 00:17:22.419 "message": "Invalid cntlid range [65520-65519]" 00:17:22.419 }' 00:17:22.419 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:22.419 { 00:17:22.419 "nqn": "nqn.2016-06.io.spdk:cnode12432", 00:17:22.419 "min_cntlid": 65520, 00:17:22.419 "method": "nvmf_create_subsystem", 00:17:22.419 "req_id": 1 00:17:22.419 } 00:17:22.419 Got JSON-RPC error response 00:17:22.419 response: 00:17:22.419 { 00:17:22.419 "code": -32602, 00:17:22.419 "message": "Invalid cntlid range [65520-65519]" 00:17:22.419 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:22.419 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20797 -I 0 00:17:22.419 [2024-12-14 02:58:37.473100] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20797: invalid cntlid range [1-0] 00:17:22.419 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:22.419 { 00:17:22.419 "nqn": "nqn.2016-06.io.spdk:cnode20797", 00:17:22.419 "max_cntlid": 0, 00:17:22.419 "method": "nvmf_create_subsystem", 00:17:22.419 "req_id": 1 00:17:22.419 } 00:17:22.419 Got JSON-RPC error response 00:17:22.419 response: 00:17:22.419 { 00:17:22.419 "code": -32602, 00:17:22.419 "message": "Invalid cntlid range [1-0]" 00:17:22.419 }' 00:17:22.419 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:22.419 { 00:17:22.419 "nqn": "nqn.2016-06.io.spdk:cnode20797", 00:17:22.419 "max_cntlid": 0, 00:17:22.419 "method": "nvmf_create_subsystem", 00:17:22.419 "req_id": 1 00:17:22.419 } 00:17:22.419 Got JSON-RPC error response 00:17:22.419 response: 00:17:22.419 { 00:17:22.419 "code": -32602, 00:17:22.419 "message": "Invalid cntlid range [1-0]" 00:17:22.419 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:22.419 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23436 -I 65520 00:17:22.679 [2024-12-14 02:58:37.669790] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23436: invalid cntlid range [1-65520] 00:17:22.679 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:22.679 { 00:17:22.679 "nqn": "nqn.2016-06.io.spdk:cnode23436", 00:17:22.679 "max_cntlid": 65520, 00:17:22.679 "method": "nvmf_create_subsystem", 00:17:22.679 "req_id": 1 00:17:22.679 } 00:17:22.679 Got JSON-RPC error response 00:17:22.679 response: 00:17:22.679 { 00:17:22.679 "code": -32602, 00:17:22.679 "message": "Invalid cntlid range [1-65520]" 00:17:22.679 }' 00:17:22.679 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 
00:17:22.679 { 00:17:22.679 "nqn": "nqn.2016-06.io.spdk:cnode23436", 00:17:22.679 "max_cntlid": 65520, 00:17:22.679 "method": "nvmf_create_subsystem", 00:17:22.679 "req_id": 1 00:17:22.679 } 00:17:22.679 Got JSON-RPC error response 00:17:22.679 response: 00:17:22.679 { 00:17:22.679 "code": -32602, 00:17:22.679 "message": "Invalid cntlid range [1-65520]" 00:17:22.679 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:22.679 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23642 -i 6 -I 5 00:17:22.938 [2024-12-14 02:58:37.862465] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23642: invalid cntlid range [6-5] 00:17:22.938 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:22.938 { 00:17:22.938 "nqn": "nqn.2016-06.io.spdk:cnode23642", 00:17:22.938 "min_cntlid": 6, 00:17:22.938 "max_cntlid": 5, 00:17:22.938 "method": "nvmf_create_subsystem", 00:17:22.938 "req_id": 1 00:17:22.938 } 00:17:22.938 Got JSON-RPC error response 00:17:22.938 response: 00:17:22.938 { 00:17:22.938 "code": -32602, 00:17:22.938 "message": "Invalid cntlid range [6-5]" 00:17:22.938 }' 00:17:22.938 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:22.938 { 00:17:22.938 "nqn": "nqn.2016-06.io.spdk:cnode23642", 00:17:22.938 "min_cntlid": 6, 00:17:22.938 "max_cntlid": 5, 00:17:22.938 "method": "nvmf_create_subsystem", 00:17:22.938 "req_id": 1 00:17:22.938 } 00:17:22.938 Got JSON-RPC error response 00:17:22.938 response: 00:17:22.938 { 00:17:22.938 "code": -32602, 00:17:22.938 "message": "Invalid cntlid range [6-5]" 00:17:22.938 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:22.939 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:22.939 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:22.939 { 00:17:22.939 "name": "foobar", 00:17:22.939 "method": "nvmf_delete_target", 00:17:22.939 "req_id": 1 00:17:22.939 } 00:17:22.939 Got JSON-RPC error response 00:17:22.939 response: 00:17:22.939 { 00:17:22.939 "code": -32602, 00:17:22.939 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:22.939 }' 00:17:22.939 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:22.939 { 00:17:22.939 "name": "foobar", 00:17:22.939 "method": "nvmf_delete_target", 00:17:22.939 "req_id": 1 00:17:22.939 } 00:17:22.939 Got JSON-RPC error response 00:17:22.939 response: 00:17:22.939 { 00:17:22.939 "code": -32602, 00:17:22.939 "message": "The specified target doesn't exist, cannot delete it." 
00:17:22.939 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:22.939 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:22.939 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:22.939 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:22.939 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:22.939 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:22.939 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:22.939 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:22.939 02:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:22.939 rmmod nvme_tcp 00:17:22.939 rmmod nvme_fabrics 00:17:22.939 rmmod nvme_keyring 00:17:22.939 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:22.939 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:22.939 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:22.939 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 275650 ']' 00:17:22.939 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 275650 00:17:22.939 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 275650 ']' 00:17:22.939 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 275650 00:17:22.939 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:22.939 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.939 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275650 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275650' 00:17:23.199 killing process with pid 275650 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 275650 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 275650 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.199 02:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.739 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:25.739 00:17:25.739 real 0m11.913s 00:17:25.739 user 0m18.405s 00:17:25.739 sys 0m5.350s 00:17:25.739 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.739 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:25.739 ************************************ 00:17:25.739 END TEST nvmf_invalid 00:17:25.739 ************************************ 00:17:25.739 02:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:25.739 02:58:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:25.740 ************************************ 00:17:25.740 START TEST nvmf_connect_stress 00:17:25.740 ************************************ 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:25.740 * Looking for test storage... 
00:17:25.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:25.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.740 --rc genhtml_branch_coverage=1 00:17:25.740 --rc genhtml_function_coverage=1 00:17:25.740 --rc genhtml_legend=1 00:17:25.740 --rc geninfo_all_blocks=1 00:17:25.740 --rc geninfo_unexecuted_blocks=1 00:17:25.740 00:17:25.740 ' 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:25.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.740 --rc genhtml_branch_coverage=1 00:17:25.740 --rc genhtml_function_coverage=1 00:17:25.740 --rc genhtml_legend=1 00:17:25.740 --rc geninfo_all_blocks=1 00:17:25.740 --rc geninfo_unexecuted_blocks=1 00:17:25.740 00:17:25.740 ' 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:25.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.740 --rc genhtml_branch_coverage=1 00:17:25.740 --rc genhtml_function_coverage=1 00:17:25.740 --rc genhtml_legend=1 00:17:25.740 --rc geninfo_all_blocks=1 00:17:25.740 --rc geninfo_unexecuted_blocks=1 00:17:25.740 00:17:25.740 ' 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:25.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.740 --rc genhtml_branch_coverage=1 00:17:25.740 --rc genhtml_function_coverage=1 00:17:25.740 --rc genhtml_legend=1 00:17:25.740 --rc geninfo_all_blocks=1 00:17:25.740 --rc geninfo_unexecuted_blocks=1 00:17:25.740 00:17:25.740 ' 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.740 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:25.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:25.741 02:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:32.316 02:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.316 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:32.317 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:32.317 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:32.317 Found net devices under 0000:af:00.0: cvl_0_0 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:32.317 Found net devices under 0000:af:00.1: cvl_0_1 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:32.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:17:32.317 00:17:32.317 --- 10.0.0.2 ping statistics --- 00:17:32.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.317 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:32.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:17:32.317 00:17:32.317 --- 10.0.0.1 ping statistics --- 00:17:32.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.317 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=279820 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 279820 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 279820 ']' 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:32.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.317 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.317 [2024-12-14 02:58:46.588126] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:32.318 [2024-12-14 02:58:46.588176] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.318 [2024-12-14 02:58:46.668575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:32.318 [2024-12-14 02:58:46.691022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.318 [2024-12-14 02:58:46.691058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.318 [2024-12-14 02:58:46.691065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.318 [2024-12-14 02:58:46.691070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.318 [2024-12-14 02:58:46.691076] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.318 [2024-12-14 02:58:46.692305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.318 [2024-12-14 02:58:46.692414] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.318 [2024-12-14 02:58:46.692415] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.318 [2024-12-14 02:58:46.824164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.318 [2024-12-14 02:58:46.848370] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.318 NULL1 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=279979 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:32.318 02:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.318 02:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.318 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.318 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:32.318 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.318 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.318 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.578 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.578 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:32.578 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.578 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.578 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.837 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.837 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:32.837 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.837 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.837 02:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.405 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.405 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:33.405 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.405 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.405 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.664 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.664 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:33.664 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.664 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.664 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.923 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.923 02:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:33.923 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.923 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.923 02:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.182 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.182 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:34.182 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.182 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.182 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.441 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.441 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:34.441 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.441 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.441 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.010 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.010 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:35.010 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.010 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.010 02:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.269 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.269 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:35.269 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.269 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.269 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.528 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.528 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:35.528 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.528 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.528 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.787 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.787 02:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:35.787 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.787 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.787 02:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.355 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.355 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:36.355 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.355 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.355 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.614 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.614 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:36.614 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.614 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.614 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.874 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.874 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:36.874 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.874 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.874 02:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.133 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.133 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:37.133 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.133 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.133 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.392 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.392 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:37.392 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.392 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.392 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.960 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.960 02:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:37.960 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.960 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.960 02:58:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.219 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.219 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:38.219 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.219 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.219 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.478 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.478 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:38.478 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.478 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.478 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.737 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.737 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:38.737 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.737 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.737 02:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.996 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.996 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:38.996 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.996 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.996 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.564 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.564 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:39.564 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.564 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.564 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.823 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.823 02:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:39.823 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.823 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.823 02:58:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.082 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.082 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:40.082 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.082 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.082 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.340 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.340 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:40.340 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.340 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.340 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.906 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.906 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:40.906 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.906 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.906 02:58:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.163 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.163 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:41.163 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.163 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.164 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.422 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.422 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:41.422 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.422 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.422 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.681 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.681 02:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:41.681 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.681 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.681 02:58:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.940 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:41.940 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.940 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279979 00:17:41.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (279979) - No such process 00:17:41.940 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 279979 00:17:41.940 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:41.940 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:41.940 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:41.940 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:41.940 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:41.940 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:41.940 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:41.940 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:41.940 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:41.940 rmmod nvme_tcp 00:17:41.940 rmmod nvme_fabrics 00:17:42.200 rmmod nvme_keyring 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 279820 ']' 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 279820 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 279820 ']' 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 279820 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279820 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279820' 00:17:42.200 killing process with pid 279820 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 279820 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 279820 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:42.200 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:42.459 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:42.459 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:42.459 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.459 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.459 02:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.365 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:44.365 00:17:44.365 real 0m18.991s 00:17:44.365 user 0m41.499s 00:17:44.365 sys 0m6.630s 00:17:44.365 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.365 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.365 ************************************ 00:17:44.365 END TEST nvmf_connect_stress 00:17:44.365 ************************************ 00:17:44.365 02:58:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:44.365 02:58:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:44.365 02:58:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.365 02:58:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:44.365 ************************************ 00:17:44.365 START TEST nvmf_fused_ordering 00:17:44.365 ************************************ 00:17:44.365 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:44.625 * Looking for test storage... 
00:17:44.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.625 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.626 --rc genhtml_branch_coverage=1 00:17:44.626 --rc genhtml_function_coverage=1 00:17:44.626 --rc genhtml_legend=1 00:17:44.626 --rc geninfo_all_blocks=1 00:17:44.626 --rc geninfo_unexecuted_blocks=1 00:17:44.626 00:17:44.626 ' 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.626 --rc genhtml_branch_coverage=1 00:17:44.626 --rc genhtml_function_coverage=1 00:17:44.626 --rc genhtml_legend=1 00:17:44.626 --rc geninfo_all_blocks=1 00:17:44.626 --rc geninfo_unexecuted_blocks=1 00:17:44.626 00:17:44.626 ' 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.626 --rc genhtml_branch_coverage=1 00:17:44.626 --rc genhtml_function_coverage=1 00:17:44.626 --rc genhtml_legend=1 00:17:44.626 --rc geninfo_all_blocks=1 00:17:44.626 --rc geninfo_unexecuted_blocks=1 00:17:44.626 00:17:44.626 ' 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.626 --rc genhtml_branch_coverage=1 00:17:44.626 --rc genhtml_function_coverage=1 00:17:44.626 --rc genhtml_legend=1 00:17:44.626 --rc geninfo_all_blocks=1 00:17:44.626 --rc geninfo_unexecuted_blocks=1 00:17:44.626 00:17:44.626 ' 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:44.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:44.626 02:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:51.200 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.200 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:51.200 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:51.200 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:51.200 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:51.200 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:51.200 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:51.200 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:51.201 02:59:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:51.201 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:51.201 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:51.201 Found net devices under 0000:af:00.0: cvl_0_0 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:51.201 Found net devices under 0000:af:00.1: cvl_0_1 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:51.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:17:51.201 00:17:51.201 --- 10.0.0.2 ping statistics --- 00:17:51.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.201 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:17:51.201 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:17:51.201 00:17:51.201 --- 10.0.0.1 ping statistics --- 00:17:51.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.201 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=285032 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 285032 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 285032 ']' 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:51.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:51.202 [2024-12-14 02:59:05.597124] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:51.202 [2024-12-14 02:59:05.597167] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.202 [2024-12-14 02:59:05.674711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.202 [2024-12-14 02:59:05.694929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.202 [2024-12-14 02:59:05.694964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.202 [2024-12-14 02:59:05.694972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.202 [2024-12-14 02:59:05.694978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.202 [2024-12-14 02:59:05.694983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.202 [2024-12-14 02:59:05.695489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:51.202 [2024-12-14 02:59:05.837487] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:51.202 [2024-12-14 02:59:05.857663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:51.202 NULL1 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.202 02:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:51.202 [2024-12-14 02:59:05.914808] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
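Before the fused_ordering binary attaches, the rpc_cmd calls above configure the target. As a rough standalone equivalent (a sketch only: rpc_cmd is assumed to forward these arguments to scripts/rpc.py against the /var/tmp/spdk.sock socket reported earlier in this log), the same setup would look like:
  # Sketch of the target configuration traced above; flags are copied verbatim from the rpc_cmd trace.
  RPC='./scripts/rpc.py -s /var/tmp/spdk.sock'            # assumption: run from the spdk repo root on this host
  $RPC nvmf_create_transport -t tcp -o -u 8192            # TCP transport with the traced options
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                    # null bdev backing the ~1GB namespace reported below
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'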
00:17:51.202 [2024-12-14 02:59:05.914839] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285051 ] 00:17:51.462 Attached to nqn.2016-06.io.spdk:cnode1 00:17:51.462 Namespace ID: 1 size: 1GB 00:17:51.462 fused_ordering(0) 00:17:51.462 fused_ordering(1) 00:17:51.462 fused_ordering(2) 00:17:51.462 fused_ordering(3) 00:17:51.462 fused_ordering(4) 00:17:51.462 fused_ordering(5) 00:17:51.462 fused_ordering(6) 00:17:51.462 fused_ordering(7) 00:17:51.462 fused_ordering(8) 00:17:51.462 fused_ordering(9) 00:17:51.462 fused_ordering(10) 00:17:51.462 fused_ordering(11) 00:17:51.462 fused_ordering(12) 00:17:51.462 fused_ordering(13) 00:17:51.462 fused_ordering(14) 00:17:51.462 fused_ordering(15) 00:17:51.462 fused_ordering(16) 00:17:51.462 fused_ordering(17) 00:17:51.462 fused_ordering(18) 00:17:51.462 fused_ordering(19) 00:17:51.462 fused_ordering(20) 00:17:51.462 fused_ordering(21) 00:17:51.462 fused_ordering(22) 00:17:51.462 fused_ordering(23) 00:17:51.462 fused_ordering(24) 00:17:51.462 fused_ordering(25) 00:17:51.462 fused_ordering(26) 00:17:51.462 fused_ordering(27) 00:17:51.462 fused_ordering(28) 00:17:51.462 fused_ordering(29) 00:17:51.462 fused_ordering(30) 00:17:51.462 fused_ordering(31) 00:17:51.462 fused_ordering(32) 00:17:51.462 fused_ordering(33) 00:17:51.462 fused_ordering(34) 00:17:51.462 fused_ordering(35) 00:17:51.462 fused_ordering(36) 00:17:51.462 fused_ordering(37) 00:17:51.462 fused_ordering(38) 00:17:51.462 fused_ordering(39) 00:17:51.462 fused_ordering(40) 00:17:51.462 fused_ordering(41) 00:17:51.462 fused_ordering(42) 00:17:51.462 fused_ordering(43) 00:17:51.462 fused_ordering(44) 00:17:51.462 fused_ordering(45) 00:17:51.462 fused_ordering(46) 00:17:51.462 fused_ordering(47) 00:17:51.462 fused_ordering(48) 00:17:51.462 fused_ordering(49) 00:17:51.462 fused_ordering(50) 00:17:51.462 fused_ordering(51) 00:17:51.462 fused_ordering(52) 00:17:51.462 fused_ordering(53) 00:17:51.462 fused_ordering(54) 00:17:51.462 fused_ordering(55) 00:17:51.462 fused_ordering(56) 00:17:51.462 fused_ordering(57) 00:17:51.462 fused_ordering(58) 00:17:51.462 fused_ordering(59) 00:17:51.462 fused_ordering(60) 00:17:51.462 fused_ordering(61) 00:17:51.462 fused_ordering(62) 00:17:51.462 fused_ordering(63) 00:17:51.462 fused_ordering(64) 00:17:51.462 fused_ordering(65) 00:17:51.462 fused_ordering(66) 00:17:51.462 fused_ordering(67) 00:17:51.462 fused_ordering(68) 00:17:51.462 fused_ordering(69) 00:17:51.462 fused_ordering(70) 00:17:51.462 fused_ordering(71) 00:17:51.462 fused_ordering(72) 00:17:51.462 fused_ordering(73) 00:17:51.462 fused_ordering(74) 00:17:51.462 fused_ordering(75) 00:17:51.462 fused_ordering(76) 00:17:51.462 fused_ordering(77) 00:17:51.462 fused_ordering(78) 00:17:51.462 fused_ordering(79) 00:17:51.462 fused_ordering(80) 00:17:51.462 fused_ordering(81) 00:17:51.462 fused_ordering(82) 00:17:51.462 fused_ordering(83) 00:17:51.462 fused_ordering(84) 00:17:51.462 fused_ordering(85) 00:17:51.462 fused_ordering(86) 00:17:51.462 fused_ordering(87) 00:17:51.462 fused_ordering(88) 00:17:51.462 fused_ordering(89) 00:17:51.462 fused_ordering(90) 00:17:51.462 fused_ordering(91) 00:17:51.462 fused_ordering(92) 00:17:51.462 fused_ordering(93) 00:17:51.462 fused_ordering(94) 00:17:51.462 fused_ordering(95) 00:17:51.462 fused_ordering(96) 00:17:51.463 fused_ordering(97) 00:17:51.463 fused_ordering(98) 
00:17:51.463 fused_ordering(99) 00:17:51.463 fused_ordering(100) 00:17:51.463 fused_ordering(101) 00:17:51.463 fused_ordering(102) 00:17:51.463 fused_ordering(103) 00:17:51.463 fused_ordering(104) 00:17:51.463 fused_ordering(105) 00:17:51.463 fused_ordering(106) 00:17:51.463 fused_ordering(107) 00:17:51.463 fused_ordering(108) 00:17:51.463 fused_ordering(109) 00:17:51.463 fused_ordering(110) 00:17:51.463 fused_ordering(111) 00:17:51.463 fused_ordering(112) 00:17:51.463 fused_ordering(113) 00:17:51.463 fused_ordering(114) 00:17:51.463 fused_ordering(115) 00:17:51.463 fused_ordering(116) 00:17:51.463 fused_ordering(117) 00:17:51.463 fused_ordering(118) 00:17:51.463 fused_ordering(119) 00:17:51.463 fused_ordering(120) 00:17:51.463 fused_ordering(121) 00:17:51.463 fused_ordering(122) 00:17:51.463 fused_ordering(123) 00:17:51.463 fused_ordering(124) 00:17:51.463 fused_ordering(125) 00:17:51.463 fused_ordering(126) 00:17:51.463 fused_ordering(127) 00:17:51.463 fused_ordering(128) 00:17:51.463 fused_ordering(129) 00:17:51.463 fused_ordering(130) 00:17:51.463 fused_ordering(131) 00:17:51.463 fused_ordering(132) 00:17:51.463 fused_ordering(133) 00:17:51.463 fused_ordering(134) 00:17:51.463 fused_ordering(135) 00:17:51.463 fused_ordering(136) 00:17:51.463 fused_ordering(137) 00:17:51.463 fused_ordering(138) 00:17:51.463 fused_ordering(139) 00:17:51.463 fused_ordering(140) 00:17:51.463 fused_ordering(141) 00:17:51.463 fused_ordering(142) 00:17:51.463 fused_ordering(143) 00:17:51.463 fused_ordering(144) 00:17:51.463 fused_ordering(145) 00:17:51.463 fused_ordering(146) 00:17:51.463 fused_ordering(147) 00:17:51.463 fused_ordering(148) 00:17:51.463 fused_ordering(149) 00:17:51.463 fused_ordering(150) 00:17:51.463 fused_ordering(151) 00:17:51.463 fused_ordering(152) 00:17:51.463 fused_ordering(153) 00:17:51.463 fused_ordering(154) 00:17:51.463 fused_ordering(155) 00:17:51.463 fused_ordering(156) 00:17:51.463 fused_ordering(157) 00:17:51.463 fused_ordering(158) 00:17:51.463 fused_ordering(159) 00:17:51.463 fused_ordering(160) 00:17:51.463 fused_ordering(161) 00:17:51.463 fused_ordering(162) 00:17:51.463 fused_ordering(163) 00:17:51.463 fused_ordering(164) 00:17:51.463 fused_ordering(165) 00:17:51.463 fused_ordering(166) 00:17:51.463 fused_ordering(167) 00:17:51.463 fused_ordering(168) 00:17:51.463 fused_ordering(169) 00:17:51.463 fused_ordering(170) 00:17:51.463 fused_ordering(171) 00:17:51.463 fused_ordering(172) 00:17:51.463 fused_ordering(173) 00:17:51.463 fused_ordering(174) 00:17:51.463 fused_ordering(175) 00:17:51.463 fused_ordering(176) 00:17:51.463 fused_ordering(177) 00:17:51.463 fused_ordering(178) 00:17:51.463 fused_ordering(179) 00:17:51.463 fused_ordering(180) 00:17:51.463 fused_ordering(181) 00:17:51.463 fused_ordering(182) 00:17:51.463 fused_ordering(183) 00:17:51.463 fused_ordering(184) 00:17:51.463 fused_ordering(185) 00:17:51.463 fused_ordering(186) 00:17:51.463 fused_ordering(187) 00:17:51.463 fused_ordering(188) 00:17:51.463 fused_ordering(189) 00:17:51.463 fused_ordering(190) 00:17:51.463 fused_ordering(191) 00:17:51.463 fused_ordering(192) 00:17:51.463 fused_ordering(193) 00:17:51.463 fused_ordering(194) 00:17:51.463 fused_ordering(195) 00:17:51.463 fused_ordering(196) 00:17:51.463 fused_ordering(197) 00:17:51.463 fused_ordering(198) 00:17:51.463 fused_ordering(199) 00:17:51.463 fused_ordering(200) 00:17:51.463 fused_ordering(201) 00:17:51.463 fused_ordering(202) 00:17:51.463 fused_ordering(203) 00:17:51.463 fused_ordering(204) 00:17:51.463 fused_ordering(205) 00:17:51.722 
fused_ordering(206) 00:17:51.722 fused_ordering(207) 00:17:51.722 fused_ordering(208) 00:17:51.722 fused_ordering(209) 00:17:51.722 fused_ordering(210) 00:17:51.722 fused_ordering(211) 00:17:51.722 fused_ordering(212) 00:17:51.722 fused_ordering(213) 00:17:51.722 fused_ordering(214) 00:17:51.722 fused_ordering(215) 00:17:51.722 fused_ordering(216) 00:17:51.722 fused_ordering(217) 00:17:51.722 fused_ordering(218) 00:17:51.722 fused_ordering(219) 00:17:51.722 fused_ordering(220) 00:17:51.722 fused_ordering(221) 00:17:51.722 fused_ordering(222) 00:17:51.722 fused_ordering(223) 00:17:51.722 fused_ordering(224) 00:17:51.722 fused_ordering(225) 00:17:51.722 fused_ordering(226) 00:17:51.722 fused_ordering(227) 00:17:51.722 fused_ordering(228) 00:17:51.722 fused_ordering(229) 00:17:51.722 fused_ordering(230) 00:17:51.722 fused_ordering(231) 00:17:51.722 fused_ordering(232) 00:17:51.722 fused_ordering(233) 00:17:51.722 fused_ordering(234) 00:17:51.722 fused_ordering(235) 00:17:51.722 fused_ordering(236) 00:17:51.722 fused_ordering(237) 00:17:51.722 fused_ordering(238) 00:17:51.722 fused_ordering(239) 00:17:51.722 fused_ordering(240) 00:17:51.722 fused_ordering(241) 00:17:51.722 fused_ordering(242) 00:17:51.722 fused_ordering(243) 00:17:51.722 fused_ordering(244) 00:17:51.722 fused_ordering(245) 00:17:51.722 fused_ordering(246) 00:17:51.722 fused_ordering(247) 00:17:51.723 fused_ordering(248) 00:17:51.723 fused_ordering(249) 00:17:51.723 fused_ordering(250) 00:17:51.723 fused_ordering(251) 00:17:51.723 fused_ordering(252) 00:17:51.723 fused_ordering(253) 00:17:51.723 fused_ordering(254) 00:17:51.723 fused_ordering(255) 00:17:51.723 fused_ordering(256) 00:17:51.723 fused_ordering(257) 00:17:51.723 fused_ordering(258) 00:17:51.723 fused_ordering(259) 00:17:51.723 fused_ordering(260) 00:17:51.723 fused_ordering(261) 00:17:51.723 fused_ordering(262) 00:17:51.723 fused_ordering(263) 00:17:51.723 fused_ordering(264) 00:17:51.723 fused_ordering(265) 00:17:51.723 fused_ordering(266) 00:17:51.723 fused_ordering(267) 00:17:51.723 fused_ordering(268) 00:17:51.723 fused_ordering(269) 00:17:51.723 fused_ordering(270) 00:17:51.723 fused_ordering(271) 00:17:51.723 fused_ordering(272) 00:17:51.723 fused_ordering(273) 00:17:51.723 fused_ordering(274) 00:17:51.723 fused_ordering(275) 00:17:51.723 fused_ordering(276) 00:17:51.723 fused_ordering(277) 00:17:51.723 fused_ordering(278) 00:17:51.723 fused_ordering(279) 00:17:51.723 fused_ordering(280) 00:17:51.723 fused_ordering(281) 00:17:51.723 fused_ordering(282) 00:17:51.723 fused_ordering(283) 00:17:51.723 fused_ordering(284) 00:17:51.723 fused_ordering(285) 00:17:51.723 fused_ordering(286) 00:17:51.723 fused_ordering(287) 00:17:51.723 fused_ordering(288) 00:17:51.723 fused_ordering(289) 00:17:51.723 fused_ordering(290) 00:17:51.723 fused_ordering(291) 00:17:51.723 fused_ordering(292) 00:17:51.723 fused_ordering(293) 00:17:51.723 fused_ordering(294) 00:17:51.723 fused_ordering(295) 00:17:51.723 fused_ordering(296) 00:17:51.723 fused_ordering(297) 00:17:51.723 fused_ordering(298) 00:17:51.723 fused_ordering(299) 00:17:51.723 fused_ordering(300) 00:17:51.723 fused_ordering(301) 00:17:51.723 fused_ordering(302) 00:17:51.723 fused_ordering(303) 00:17:51.723 fused_ordering(304) 00:17:51.723 fused_ordering(305) 00:17:51.723 fused_ordering(306) 00:17:51.723 fused_ordering(307) 00:17:51.723 fused_ordering(308) 00:17:51.723 fused_ordering(309) 00:17:51.723 fused_ordering(310) 00:17:51.723 fused_ordering(311) 00:17:51.723 fused_ordering(312) 00:17:51.723 fused_ordering(313) 
00:17:51.723 fused_ordering(314) 00:17:51.723 fused_ordering(315) 00:17:51.723 fused_ordering(316) 00:17:51.723 fused_ordering(317) 00:17:51.723 fused_ordering(318) 00:17:51.723 fused_ordering(319) 00:17:51.723 fused_ordering(320) 00:17:51.723 fused_ordering(321) 00:17:51.723 fused_ordering(322) 00:17:51.723 fused_ordering(323) 00:17:51.723 fused_ordering(324) 00:17:51.723 fused_ordering(325) 00:17:51.723 fused_ordering(326) 00:17:51.723 fused_ordering(327) 00:17:51.723 fused_ordering(328) 00:17:51.723 fused_ordering(329) 00:17:51.723 fused_ordering(330) 00:17:51.723 fused_ordering(331) 00:17:51.723 fused_ordering(332) 00:17:51.723 fused_ordering(333) 00:17:51.723 fused_ordering(334) 00:17:51.723 fused_ordering(335) 00:17:51.723 fused_ordering(336) 00:17:51.723 fused_ordering(337) 00:17:51.723 fused_ordering(338) 00:17:51.723 fused_ordering(339) 00:17:51.723 fused_ordering(340) 00:17:51.723 fused_ordering(341) 00:17:51.723 fused_ordering(342) 00:17:51.723 fused_ordering(343) 00:17:51.723 fused_ordering(344) 00:17:51.723 fused_ordering(345) 00:17:51.723 fused_ordering(346) 00:17:51.723 fused_ordering(347) 00:17:51.723 fused_ordering(348) 00:17:51.723 fused_ordering(349) 00:17:51.723 fused_ordering(350) 00:17:51.723 fused_ordering(351) 00:17:51.723 fused_ordering(352) 00:17:51.723 fused_ordering(353) 00:17:51.723 fused_ordering(354) 00:17:51.723 fused_ordering(355) 00:17:51.723 fused_ordering(356) 00:17:51.723 fused_ordering(357) 00:17:51.723 fused_ordering(358) 00:17:51.723 fused_ordering(359) 00:17:51.723 fused_ordering(360) 00:17:51.723 fused_ordering(361) 00:17:51.723 fused_ordering(362) 00:17:51.723 fused_ordering(363) 00:17:51.723 fused_ordering(364) 00:17:51.723 fused_ordering(365) 00:17:51.723 fused_ordering(366) 00:17:51.723 fused_ordering(367) 00:17:51.723 fused_ordering(368) 00:17:51.723 fused_ordering(369) 00:17:51.723 fused_ordering(370) 00:17:51.723 fused_ordering(371) 00:17:51.723 fused_ordering(372) 00:17:51.723 fused_ordering(373) 00:17:51.723 fused_ordering(374) 00:17:51.723 fused_ordering(375) 00:17:51.723 fused_ordering(376) 00:17:51.723 fused_ordering(377) 00:17:51.723 fused_ordering(378) 00:17:51.723 fused_ordering(379) 00:17:51.723 fused_ordering(380) 00:17:51.723 fused_ordering(381) 00:17:51.723 fused_ordering(382) 00:17:51.723 fused_ordering(383) 00:17:51.723 fused_ordering(384) 00:17:51.723 fused_ordering(385) 00:17:51.723 fused_ordering(386) 00:17:51.723 fused_ordering(387) 00:17:51.723 fused_ordering(388) 00:17:51.723 fused_ordering(389) 00:17:51.723 fused_ordering(390) 00:17:51.723 fused_ordering(391) 00:17:51.723 fused_ordering(392) 00:17:51.723 fused_ordering(393) 00:17:51.723 fused_ordering(394) 00:17:51.723 fused_ordering(395) 00:17:51.723 fused_ordering(396) 00:17:51.723 fused_ordering(397) 00:17:51.723 fused_ordering(398) 00:17:51.723 fused_ordering(399) 00:17:51.723 fused_ordering(400) 00:17:51.723 fused_ordering(401) 00:17:51.723 fused_ordering(402) 00:17:51.723 fused_ordering(403) 00:17:51.723 fused_ordering(404) 00:17:51.723 fused_ordering(405) 00:17:51.723 fused_ordering(406) 00:17:51.723 fused_ordering(407) 00:17:51.723 fused_ordering(408) 00:17:51.723 fused_ordering(409) 00:17:51.723 fused_ordering(410) 00:17:51.982 fused_ordering(411) 00:17:51.982 fused_ordering(412) 00:17:51.982 fused_ordering(413) 00:17:51.982 fused_ordering(414) 00:17:51.982 fused_ordering(415) 00:17:51.982 fused_ordering(416) 00:17:51.982 fused_ordering(417) 00:17:51.982 fused_ordering(418) 00:17:51.982 fused_ordering(419) 00:17:51.982 fused_ordering(420) 00:17:51.982 
fused_ordering(421) 00:17:51.982 fused_ordering(422) 00:17:51.982 fused_ordering(423) 00:17:51.982 fused_ordering(424) 00:17:51.982 fused_ordering(425) 00:17:51.982 fused_ordering(426) 00:17:51.982 fused_ordering(427) 00:17:51.982 fused_ordering(428) 00:17:51.982 fused_ordering(429) 00:17:51.982 fused_ordering(430) 00:17:51.982 fused_ordering(431) 00:17:51.982 fused_ordering(432) 00:17:51.982 fused_ordering(433) 00:17:51.982 fused_ordering(434) 00:17:51.982 fused_ordering(435) 00:17:51.982 fused_ordering(436) 00:17:51.982 fused_ordering(437) 00:17:51.982 fused_ordering(438) 00:17:51.982 fused_ordering(439) 00:17:51.982 fused_ordering(440) 00:17:51.982 fused_ordering(441) 00:17:51.982 fused_ordering(442) 00:17:51.982 fused_ordering(443) 00:17:51.982 fused_ordering(444) 00:17:51.982 fused_ordering(445) 00:17:51.982 fused_ordering(446) 00:17:51.982 fused_ordering(447) 00:17:51.982 fused_ordering(448) 00:17:51.982 fused_ordering(449) 00:17:51.982 fused_ordering(450) 00:17:51.982 fused_ordering(451) 00:17:51.982 fused_ordering(452) 00:17:51.982 fused_ordering(453) 00:17:51.982 fused_ordering(454) 00:17:51.982 fused_ordering(455) 00:17:51.982 fused_ordering(456) 00:17:51.982 fused_ordering(457) 00:17:51.982 fused_ordering(458) 00:17:51.982 fused_ordering(459) 00:17:51.982 fused_ordering(460) 00:17:51.982 fused_ordering(461) 00:17:51.983 fused_ordering(462) 00:17:51.983 fused_ordering(463) 00:17:51.983 fused_ordering(464) 00:17:51.983 fused_ordering(465) 00:17:51.983 fused_ordering(466) 00:17:51.983 fused_ordering(467) 00:17:51.983 fused_ordering(468) 00:17:51.983 fused_ordering(469) 00:17:51.983 fused_ordering(470) 00:17:51.983 fused_ordering(471) 00:17:51.983 fused_ordering(472) 00:17:51.983 fused_ordering(473) 00:17:51.983 fused_ordering(474) 00:17:51.983 fused_ordering(475) 00:17:51.983 fused_ordering(476) 00:17:51.983 fused_ordering(477) 00:17:51.983 fused_ordering(478) 00:17:51.983 fused_ordering(479) 00:17:51.983 fused_ordering(480) 00:17:51.983 fused_ordering(481) 00:17:51.983 fused_ordering(482) 00:17:51.983 fused_ordering(483) 00:17:51.983 fused_ordering(484) 00:17:51.983 fused_ordering(485) 00:17:51.983 fused_ordering(486) 00:17:51.983 fused_ordering(487) 00:17:51.983 fused_ordering(488) 00:17:51.983 fused_ordering(489) 00:17:51.983 fused_ordering(490) 00:17:51.983 fused_ordering(491) 00:17:51.983 fused_ordering(492) 00:17:51.983 fused_ordering(493) 00:17:51.983 fused_ordering(494) 00:17:51.983 fused_ordering(495) 00:17:51.983 fused_ordering(496) 00:17:51.983 fused_ordering(497) 00:17:51.983 fused_ordering(498) 00:17:51.983 fused_ordering(499) 00:17:51.983 fused_ordering(500) 00:17:51.983 fused_ordering(501) 00:17:51.983 fused_ordering(502) 00:17:51.983 fused_ordering(503) 00:17:51.983 fused_ordering(504) 00:17:51.983 fused_ordering(505) 00:17:51.983 fused_ordering(506) 00:17:51.983 fused_ordering(507) 00:17:51.983 fused_ordering(508) 00:17:51.983 fused_ordering(509) 00:17:51.983 fused_ordering(510) 00:17:51.983 fused_ordering(511) 00:17:51.983 fused_ordering(512) 00:17:51.983 fused_ordering(513) 00:17:51.983 fused_ordering(514) 00:17:51.983 fused_ordering(515) 00:17:51.983 fused_ordering(516) 00:17:51.983 fused_ordering(517) 00:17:51.983 fused_ordering(518) 00:17:51.983 fused_ordering(519) 00:17:51.983 fused_ordering(520) 00:17:51.983 fused_ordering(521) 00:17:51.983 fused_ordering(522) 00:17:51.983 fused_ordering(523) 00:17:51.983 fused_ordering(524) 00:17:51.983 fused_ordering(525) 00:17:51.983 fused_ordering(526) 00:17:51.983 fused_ordering(527) 00:17:51.983 fused_ordering(528) 
00:17:51.983 fused_ordering(529) 00:17:51.983 fused_ordering(530) [fused_ordering output condensed: the counter runs uninterrupted from 531 through 957, one entry per value, with the wall-clock timestamp advancing from 00:17:51.983 through 00:17:52.242 to 00:17:52.502] 00:17:52.502 fused_ordering(958)
00:17:52.502 fused_ordering(959) 00:17:52.502 fused_ordering(960) 00:17:52.502 fused_ordering(961) 00:17:52.502 fused_ordering(962) 00:17:52.502 fused_ordering(963) 00:17:52.502 fused_ordering(964) 00:17:52.502 fused_ordering(965) 00:17:52.502 fused_ordering(966) 00:17:52.502 fused_ordering(967) 00:17:52.502 fused_ordering(968) 00:17:52.502 fused_ordering(969) 00:17:52.502 fused_ordering(970) 00:17:52.502 fused_ordering(971) 00:17:52.502 fused_ordering(972) 00:17:52.502 fused_ordering(973) 00:17:52.502 fused_ordering(974) 00:17:52.502 fused_ordering(975) 00:17:52.502 fused_ordering(976) 00:17:52.502 fused_ordering(977) 00:17:52.502 fused_ordering(978) 00:17:52.502 fused_ordering(979) 00:17:52.502 fused_ordering(980) 00:17:52.502 fused_ordering(981) 00:17:52.502 fused_ordering(982) 00:17:52.502 fused_ordering(983) 00:17:52.502 fused_ordering(984) 00:17:52.502 fused_ordering(985) 00:17:52.502 fused_ordering(986) 00:17:52.502 fused_ordering(987) 00:17:52.502 fused_ordering(988) 00:17:52.502 fused_ordering(989) 00:17:52.502 fused_ordering(990) 00:17:52.502 fused_ordering(991) 00:17:52.502 fused_ordering(992) 00:17:52.502 fused_ordering(993) 00:17:52.502 fused_ordering(994) 00:17:52.502 fused_ordering(995) 00:17:52.502 fused_ordering(996) 00:17:52.502 fused_ordering(997) 00:17:52.502 fused_ordering(998) 00:17:52.502 fused_ordering(999) 00:17:52.502 fused_ordering(1000) 00:17:52.502 fused_ordering(1001) 00:17:52.502 fused_ordering(1002) 00:17:52.502 fused_ordering(1003) 00:17:52.502 fused_ordering(1004) 00:17:52.502 fused_ordering(1005) 00:17:52.502 fused_ordering(1006) 00:17:52.502 fused_ordering(1007) 00:17:52.502 fused_ordering(1008) 00:17:52.502 fused_ordering(1009) 00:17:52.502 fused_ordering(1010) 00:17:52.502 fused_ordering(1011) 00:17:52.502 fused_ordering(1012) 00:17:52.502 fused_ordering(1013) 00:17:52.502 fused_ordering(1014) 00:17:52.502 fused_ordering(1015) 00:17:52.502 fused_ordering(1016) 00:17:52.502 fused_ordering(1017) 00:17:52.502 fused_ordering(1018) 00:17:52.502 fused_ordering(1019) 00:17:52.502 fused_ordering(1020) 00:17:52.502 fused_ordering(1021) 00:17:52.502 fused_ordering(1022) 00:17:52.502 fused_ordering(1023) 00:17:52.502 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:52.502 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:52.502 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:52.502 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:52.502 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:52.502 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:52.502 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:52.503 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:52.762 rmmod nvme_tcp 00:17:52.762 rmmod nvme_fabrics 00:17:52.762 rmmod nvme_keyring 00:17:52.762 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:52.762 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:52.762 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:52.762 02:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 285032 ']' 00:17:52.762 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 285032 00:17:52.762 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 285032 ']' 00:17:52.762 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 285032 00:17:52.762 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:52.762 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.762 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 285032 00:17:52.762 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:52.762 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:52.762 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 285032' 00:17:52.762 killing process with pid 285032 00:17:52.762 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 285032 00:17:52.762 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 285032 00:17:53.021 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:53.021 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:53.021 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:53.021 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:53.021 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:53.021 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:53.021 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:53.021 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:53.021 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:53.021 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.021 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.021 02:59:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.927 02:59:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:54.927 00:17:54.927 real 0m10.501s 00:17:54.927 user 0m5.179s 00:17:54.927 sys 0m5.391s 00:17:54.927 02:59:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.927 02:59:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:54.927 ************************************ 00:17:54.927 END TEST nvmf_fused_ordering 00:17:54.927 
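The block above is the standard nvmftestfini teardown for the fused_ordering run: unload the NVMe/TCP kernel modules, kill the target process, strip the test iptables rules, and tear down the test network namespace. A condensed sketch of an equivalent manual cleanup, assuming the PID (285032), interface (cvl_0_1) and namespace name (cvl_0_0_ns_spdk) seen in this run:

    # Hedged sketch mirroring the nvmftestfini steps logged above; values are from this run.
    sync
    modprobe -v -r nvme-tcp        # removes nvme_tcp (and, as logged, nvme_fabrics / nvme_keyring follow)
    modprobe -v -r nvme-fabrics
    kill 285032                    # stop the nvmf_tgt reactor; the harness also waits on the PID
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK test ACCEPT rules
    ip netns del cvl_0_0_ns_spdk 2>/dev/null                # rough equivalent of _remove_spdk_ns (assumption)
    ip -4 addr flush cvl_0_1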
************************************ 00:17:54.927 02:59:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:54.927 02:59:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:54.927 02:59:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.927 02:59:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:54.927 ************************************ 00:17:54.927 START TEST nvmf_ns_masking 00:17:54.927 ************************************ 00:17:54.927 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:55.187 * Looking for test storage... 00:17:55.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.187 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:55.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.188 --rc genhtml_branch_coverage=1 00:17:55.188 --rc genhtml_function_coverage=1 00:17:55.188 --rc genhtml_legend=1 00:17:55.188 --rc geninfo_all_blocks=1 00:17:55.188 --rc geninfo_unexecuted_blocks=1 00:17:55.188 00:17:55.188 ' 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:55.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.188 --rc genhtml_branch_coverage=1 00:17:55.188 --rc genhtml_function_coverage=1 00:17:55.188 --rc genhtml_legend=1 00:17:55.188 --rc geninfo_all_blocks=1 00:17:55.188 --rc geninfo_unexecuted_blocks=1 00:17:55.188 00:17:55.188 ' 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:55.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.188 --rc genhtml_branch_coverage=1 00:17:55.188 --rc genhtml_function_coverage=1 00:17:55.188 --rc genhtml_legend=1 00:17:55.188 --rc geninfo_all_blocks=1 00:17:55.188 --rc geninfo_unexecuted_blocks=1 00:17:55.188 00:17:55.188 ' 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:55.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.188 --rc genhtml_branch_coverage=1 00:17:55.188 --rc genhtml_function_coverage=1 00:17:55.188 --rc genhtml_legend=1 00:17:55.188 --rc geninfo_all_blocks=1 00:17:55.188 --rc geninfo_unexecuted_blocks=1 00:17:55.188 00:17:55.188 ' 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
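The lcov gate above ("lt 1.15 2") is scripts/common.sh comparing version strings field by field: each string is split on '.', '-' and ':' and the numeric fields are compared left to right. A standalone sketch of that comparison; the function name lt_version is illustrative, not the script's own:

    # Hedged sketch: field-wise "is $1 older than $2" in the style of cmp_versions above.
    lt_version() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            (( x < y )) && return 0   # an earlier field is already smaller -> strictly older
            (( x > y )) && return 1
        done
        return 1                      # identical versions are not strictly older
    }
    lt_version 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the branch taken above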
nvmf/common.sh@7 -- # uname -s 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:55.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
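The "[: : integer expression expected" message above is a real (if harmless here) scripting slip: build_nvmf_app_args hands test(1) an empty string on one side of -eq, and -eq only accepts integers. A small illustration of the failure mode and two defensive spellings; the variable name is illustrative only:

    # Reproduces the class of error logged above from nvmf/common.sh line 33.
    flag=""                                # illustrative stand-in for the unset test flag
    [ "$flag" -eq 1 ] || true              # prints "[: : integer expression expected", status 2
    [ "${flag:-0}" -eq 1 ] || true         # defaulting the value keeps -eq happy
    [[ "$flag" == 1 ]] || true             # or compare as a string and avoid -eq entirely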
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ceab4ce8-8bea-447d-bf9e-f2e03a79482b 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a12d7692-1fdc-4e31-accc-a7bce8055086 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=aa775fb8-87aa-46ce-8aa1-30d04605daed 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.188 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:55.189 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:55.189 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:55.189 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.189 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.189 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.189 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:55.189 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:55.189 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:55.189 02:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:01.763 02:59:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:01.763 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:01.763 02:59:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:01.763 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:01.763 Found net devices under 0000:af:00.0: cvl_0_0 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.763 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:01.764 Found net devices under 0000:af:00.1: cvl_0_1 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:01.764 02:59:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:01.764 02:59:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:01.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:18:01.764 00:18:01.764 --- 10.0.0.2 ping statistics --- 00:18:01.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.764 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:01.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:01.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:18:01.764 00:18:01.764 --- 10.0.0.1 ping statistics --- 00:18:01.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.764 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=288952 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 288952 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 288952 ']' 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:01.764 [2024-12-14 02:59:16.303804] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:01.764 [2024-12-14 02:59:16.303851] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.764 [2024-12-14 02:59:16.381550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.764 [2024-12-14 02:59:16.402512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.764 [2024-12-14 02:59:16.402547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.764 [2024-12-14 02:59:16.402554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.764 [2024-12-14 02:59:16.402562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.764 [2024-12-14 02:59:16.402567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
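Everything from gather_supported_nvmf_pci_devs down to the EAL banner above is the harness building an isolated point-to-point TCP test network: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator keeps cvl_0_1 with 10.0.0.1, port 4420 is opened, connectivity is ping-checked, and nvmf_tgt is started inside the namespace. A condensed sketch of those steps, using the interface names, addresses and binary path from this run (other systems will differ; the iptables comment text is shortened here):

    # Hedged sketch of the network prologue and target launch logged above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side (default namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
    # Launch the target inside the namespace; the harness then polls /var/tmp/spdk.sock (waitforlisten).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!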
00:18:01.764 [2024-12-14 02:59:16.403031] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:01.764 [2024-12-14 02:59:16.710239] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:01.764 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:02.024 Malloc1 00:18:02.024 02:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:02.024 Malloc2 00:18:02.283 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:02.283 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:02.542 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.801 [2024-12-14 02:59:17.731293] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.801 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:02.801 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I aa775fb8-87aa-46ce-8aa1-30d04605daed -a 10.0.0.2 -s 4420 -i 4 00:18:03.060 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:03.060 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:03.060 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.060 02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:03.060 
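With the reactor up, the test provisions the target over RPC: a TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, subsystem cnode1 with Malloc1 as auto-visible namespace 1, and a listener on 10.0.0.2:4420; the host side then connects with a fixed host NQN and host identifier. A sketch of that sequence, with the rpc.py path shortened but the arguments as logged:

    # Hedged sketch of the provisioning RPCs and host-side connect above.
    rpc=./scripts/rpc.py                        # full repo path in the actual run
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: 4 I/O queues, explicit host NQN and host identifier.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I aa775fb8-87aa-46ce-8aa1-30d04605daed -a 10.0.0.2 -s 4420 -i 4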
02:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:04.966 02:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:04.966 02:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:04.966 02:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:04.966 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:04.966 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:04.966 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:04.966 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:04.966 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:04.966 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:04.966 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:04.966 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:04.966 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:04.966 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:05.225 [ 0]:0x1 00:18:05.225 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:05.225 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:05.225 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=883f605b0a5540f6b799a292bc36a1f7 00:18:05.226 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 883f605b0a5540f6b799a292bc36a1f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.226 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:05.485 [ 0]:0x1 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=883f605b0a5540f6b799a292bc36a1f7 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 883f605b0a5540f6b799a292bc36a1f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.485 02:59:20 
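The probe above is the core of the visibility assertions: nvme list-ns shows whether the NSID is enumerated at all, and nvme id-ns reports an all-zero NGUID for a namespace the host is not allowed to see. A simplified rendering of that ns_is_visible-style check (controller /dev/nvme0 is the one resolved just above via nvme list-subsys and jq):

    # Hedged, simplified version of the visibility probe logged above.
    ns_is_visible() {        # $1 = NSID, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep -q "$1" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ "$nguid" != "00000000000000000000000000000000" ]]   # masked namespaces report an all-zero NGUID
    }
    ns_is_visible 0x1 && echo "nsid 1 is visible to this host"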
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:05.485 [ 1]:0x2 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a4e5f17fa574a999e9a9a4af151d800 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a4e5f17fa574a999e9a9a4af151d800 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:05.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.485 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:05.744 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:06.003 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:06.003 02:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I aa775fb8-87aa-46ce-8aa1-30d04605daed -a 10.0.0.2 -s 4420 -i 4 00:18:06.003 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:06.003 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:06.003 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.003 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:06.003 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:06.003 02:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:08.540 [ 0]:0x2 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=4a4e5f17fa574a999e9a9a4af151d800 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a4e5f17fa574a999e9a9a4af151d800 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:08.540 [ 0]:0x1 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=883f605b0a5540f6b799a292bc36a1f7 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 883f605b0a5540f6b799a292bc36a1f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:08.540 [ 1]:0x2 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a4e5f17fa574a999e9a9a4af151d800 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a4e5f17fa574a999e9a9a4af151d800 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.540 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:08.799 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:08.799 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:08.799 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:08.799 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:08.799 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.800 02:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:08.800 [ 0]:0x2 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a4e5f17fa574a999e9a9a4af151d800 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a4e5f17fa574a999e9a9a4af151d800 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:08.800 02:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:09.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.059 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:09.319 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:09.319 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I aa775fb8-87aa-46ce-8aa1-30d04605daed -a 10.0.0.2 -s 4420 -i 4 00:18:09.319 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:09.319 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:09.319 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:09.319 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:09.319 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:09.319 02:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:11.856 [ 0]:0x1 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=883f605b0a5540f6b799a292bc36a1f7 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 883f605b0a5540f6b799a292bc36a1f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.856 [ 1]:0x2 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a4e5f17fa574a999e9a9a4af151d800 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a4e5f17fa574a999e9a9a4af151d800 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:11.856 [ 0]:0x2 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:11.856 02:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:12.116 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a4e5f17fa574a999e9a9a4af151d800 00:18:12.116 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a4e5f17fa574a999e9a9a4af151d800 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:12.116 02:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:12.116 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:12.116 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:12.117 [2024-12-14 02:59:27.209798] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:12.117 request: 00:18:12.117 { 00:18:12.117 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.117 "nsid": 2, 00:18:12.117 "host": "nqn.2016-06.io.spdk:host1", 00:18:12.117 "method": "nvmf_ns_remove_host", 00:18:12.117 "req_id": 1 00:18:12.117 } 00:18:12.117 Got JSON-RPC error response 00:18:12.117 response: 00:18:12.117 { 00:18:12.117 "code": -32602, 00:18:12.117 "message": "Invalid parameters" 00:18:12.117 } 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:12.117 02:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.117 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:12.376 [ 0]:0x2 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a4e5f17fa574a999e9a9a4af151d800 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a4e5f17fa574a999e9a9a4af151d800 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:12.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=290900 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 290900 
/var/tmp/host.sock 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 290900 ']' 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.376 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:12.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:12.377 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.377 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:12.377 [2024-12-14 02:59:27.443550] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:12.377 [2024-12-14 02:59:27.443595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290900 ] 00:18:12.636 [2024-12-14 02:59:27.516822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.636 [2024-12-14 02:59:27.538716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.636 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.636 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:12.636 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:12.895 02:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:13.154 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ceab4ce8-8bea-447d-bf9e-f2e03a79482b 00:18:13.154 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:13.154 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CEAB4CE88BEA447DBF9EF2E03A79482B -i 00:18:13.413 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a12d7692-1fdc-4e31-accc-a7bce8055086 00:18:13.413 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:13.413 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A12D76921FDC4E31ACCCA7BCE8055086 -i 00:18:13.413 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:13.672 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:13.932 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:13.932 02:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:14.191 nvme0n1 00:18:14.191 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:14.191 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:14.759 nvme1n2 00:18:14.759 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:14.759 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:14.759 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:14.759 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:14.759 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:14.759 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:14.759 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:14.759 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:14.759 02:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:15.018 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ceab4ce8-8bea-447d-bf9e-f2e03a79482b == \c\e\a\b\4\c\e\8\-\8\b\e\a\-\4\4\7\d\-\b\f\9\e\-\f\2\e\0\3\a\7\9\4\8\2\b ]] 00:18:15.018 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:15.018 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:15.018 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:15.277 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
a12d7692-1fdc-4e31-accc-a7bce8055086 == \a\1\2\d\7\6\9\2\-\1\f\d\c\-\4\e\3\1\-\a\c\c\c\-\a\7\b\c\e\8\0\5\5\0\8\6 ]] 00:18:15.277 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid ceab4ce8-8bea-447d-bf9e-f2e03a79482b 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g CEAB4CE88BEA447DBF9EF2E03A79482B 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g CEAB4CE88BEA447DBF9EF2E03A79482B 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:15.536 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g CEAB4CE88BEA447DBF9EF2E03A79482B 00:18:15.795 [2024-12-14 02:59:30.771579] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:15.795 [2024-12-14 02:59:30.771610] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:15.795 [2024-12-14 02:59:30.771618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.795 request: 00:18:15.795 { 00:18:15.795 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.795 "namespace": { 00:18:15.795 "bdev_name": 
"invalid", 00:18:15.795 "nsid": 1, 00:18:15.795 "nguid": "CEAB4CE88BEA447DBF9EF2E03A79482B", 00:18:15.795 "no_auto_visible": false, 00:18:15.795 "hide_metadata": false 00:18:15.795 }, 00:18:15.795 "method": "nvmf_subsystem_add_ns", 00:18:15.795 "req_id": 1 00:18:15.795 } 00:18:15.795 Got JSON-RPC error response 00:18:15.795 response: 00:18:15.795 { 00:18:15.795 "code": -32602, 00:18:15.795 "message": "Invalid parameters" 00:18:15.795 } 00:18:15.795 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:15.795 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.795 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.795 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.795 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid ceab4ce8-8bea-447d-bf9e-f2e03a79482b 00:18:15.795 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:15.795 02:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CEAB4CE88BEA447DBF9EF2E03A79482B -i 00:18:16.053 02:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:17.958 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:17.958 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:17.958 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:18.217 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:18.217 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 290900 00:18:18.217 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 290900 ']' 00:18:18.217 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 290900 00:18:18.217 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:18.217 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.217 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290900 00:18:18.217 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:18.217 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:18.218 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290900' 00:18:18.218 killing process with pid 290900 00:18:18.218 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 290900 00:18:18.218 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 290900 00:18:18.477 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:18.736 rmmod nvme_tcp 00:18:18.736 rmmod nvme_fabrics 00:18:18.736 rmmod nvme_keyring 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 288952 ']' 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 288952 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 288952 ']' 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 288952 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.736 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288952 00:18:18.995 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.995 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.995 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288952' 00:18:18.995 killing process with pid 288952 00:18:18.995 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 288952 00:18:18.995 02:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 288952 00:18:18.995 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:18.995 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:18.995 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:18.995 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:18.995 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:18.995 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:18.995 
02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:18.995 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:18.995 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:18.995 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.995 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.995 02:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:21.534 00:18:21.534 real 0m26.096s 00:18:21.534 user 0m31.104s 00:18:21.534 sys 0m7.039s 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:21.534 ************************************ 00:18:21.534 END TEST nvmf_ns_masking 00:18:21.534 ************************************ 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:21.534 ************************************ 00:18:21.534 START TEST nvmf_nvme_cli 00:18:21.534 ************************************ 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:21.534 * Looking for test storage... 
00:18:21.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:21.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.534 --rc genhtml_branch_coverage=1 00:18:21.534 --rc genhtml_function_coverage=1 00:18:21.534 --rc genhtml_legend=1 00:18:21.534 --rc geninfo_all_blocks=1 00:18:21.534 --rc geninfo_unexecuted_blocks=1 00:18:21.534 00:18:21.534 ' 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:21.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.534 --rc genhtml_branch_coverage=1 00:18:21.534 --rc genhtml_function_coverage=1 00:18:21.534 --rc genhtml_legend=1 00:18:21.534 --rc geninfo_all_blocks=1 00:18:21.534 --rc geninfo_unexecuted_blocks=1 00:18:21.534 00:18:21.534 ' 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:21.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.534 --rc genhtml_branch_coverage=1 00:18:21.534 --rc genhtml_function_coverage=1 00:18:21.534 --rc genhtml_legend=1 00:18:21.534 --rc geninfo_all_blocks=1 00:18:21.534 --rc geninfo_unexecuted_blocks=1 00:18:21.534 00:18:21.534 ' 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:21.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.534 --rc genhtml_branch_coverage=1 00:18:21.534 --rc genhtml_function_coverage=1 00:18:21.534 --rc genhtml_legend=1 00:18:21.534 --rc geninfo_all_blocks=1 00:18:21.534 --rc geninfo_unexecuted_blocks=1 00:18:21.534 00:18:21.534 ' 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.534 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:21.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:21.535 02:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:21.535 02:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:28.111 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:28.111 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.111 
02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.111 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:28.112 Found net devices under 0000:af:00.0: cvl_0_0 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:28.112 Found net devices under 0000:af:00.1: cvl_0_1 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:28.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:18:28.112 00:18:28.112 --- 10.0.0.2 ping statistics --- 00:18:28.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.112 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:28.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:18:28.112 00:18:28.112 --- 10.0.0.1 ping statistics --- 00:18:28.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.112 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=295385 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 295385 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 295385 ']' 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.112 [2024-12-14 02:59:42.481379] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
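The nvmf_tcp_init block traced above builds the point-to-point test bed by moving one port of the e810 pair into a private network namespace and pinging across it. Condensed to the bare commands visible in the trace (interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, and the 10.0.0.0/24 addresses are all taken from the log; the iptables comment argument is dropped here for brevity), the setup is roughly:

  ip netns add cvl_0_0_ns_spdk                                        # target side lives in its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check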
00:18:28.112 [2024-12-14 02:59:42.481428] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.112 [2024-12-14 02:59:42.560188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:28.112 [2024-12-14 02:59:42.585317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.112 [2024-12-14 02:59:42.585354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.112 [2024-12-14 02:59:42.585363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.112 [2024-12-14 02:59:42.585371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.112 [2024-12-14 02:59:42.585378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.112 [2024-12-14 02:59:42.586784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.112 [2024-12-14 02:59:42.586806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.112 [2024-12-14 02:59:42.586916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.112 [2024-12-14 02:59:42.586917] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.112 [2024-12-14 02:59:42.730894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.112 Malloc0 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
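The target-side configuration that this nvme_cli test drives through the rpc_cmd wrapper (against the nvmf_tgt running inside cvl_0_0_ns_spdk) reduces to the rpc.py calls below. This is a sketch assembled from the trace, not a separate script; rpc.py abbreviates the full scripts/rpc.py path used elsewhere in this log, and all NQNs, serials, and addresses are the values shown in the trace:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MB malloc bdevs, 512-byte blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420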
00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.112 Malloc1 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:28.112 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.113 [2024-12-14 02:59:42.819021] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:18:28.113 00:18:28.113 Discovery Log Number of Records 2, Generation counter 2 00:18:28.113 =====Discovery Log Entry 0====== 00:18:28.113 trtype: tcp 00:18:28.113 adrfam: ipv4 00:18:28.113 subtype: current discovery subsystem 00:18:28.113 treq: not required 00:18:28.113 portid: 0 00:18:28.113 trsvcid: 4420 00:18:28.113 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:28.113 traddr: 10.0.0.2 00:18:28.113 eflags: explicit discovery connections, duplicate discovery information 00:18:28.113 sectype: none 00:18:28.113 =====Discovery Log Entry 1====== 00:18:28.113 trtype: tcp 00:18:28.113 adrfam: ipv4 00:18:28.113 subtype: nvme subsystem 00:18:28.113 treq: not required 00:18:28.113 portid: 0 00:18:28.113 trsvcid: 4420 00:18:28.113 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:28.113 traddr: 10.0.0.2 00:18:28.113 eflags: none 00:18:28.113 sectype: none 00:18:28.113 02:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:28.113 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:28.113 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:28.113 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:28.113 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:28.113 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:28.113 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:28.113 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:28.113 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:28.113 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:28.113 02:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:29.049 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:29.049 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:29.049 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:29.049 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:29.049 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:29.049 02:59:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:31.590 02:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:31.590 /dev/nvme0n2 ]] 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:31.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:31.590 02:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:31.590 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:31.850 rmmod nvme_tcp 00:18:31.850 rmmod nvme_fabrics 00:18:31.850 rmmod nvme_keyring 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 295385 ']' 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 295385 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 295385 ']' 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 295385 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 295385 
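On the initiator side, the flow exercised above is plain nvme-cli against the 10.0.0.2:4420 listener. Stripped of the xtrace plumbing it looks like the sketch below; the commands and arguments are taken from the trace, the host NQN/ID are the values produced by nvme gen-hostnqn earlier in this run, and the HOSTNQN/HOSTID shell variables are just shorthand introduced for this sketch:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
  HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
  nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420    # expects 2 discovery log entries
  nvme connect  --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME                          # reports 2 once both namespaces appear
  nvme list                                                                       # shows /dev/nvme0n1 and /dev/nvme0n2
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1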
00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 295385' 00:18:31.850 killing process with pid 295385 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 295385 00:18:31.850 02:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 295385 00:18:32.110 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:32.110 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:32.110 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:32.110 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:32.110 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:32.110 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:32.110 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:32.110 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:32.110 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:32.110 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.110 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.110 02:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:34.647 00:18:34.647 real 0m12.943s 00:18:34.647 user 0m19.535s 00:18:34.647 sys 0m5.167s 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:34.647 ************************************ 00:18:34.647 END TEST nvmf_nvme_cli 00:18:34.647 ************************************ 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:34.647 ************************************ 00:18:34.647 START TEST nvmf_vfio_user 00:18:34.647 ************************************ 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:18:34.647 * Looking for test storage... 00:18:34.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:34.647 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:34.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.648 --rc genhtml_branch_coverage=1 00:18:34.648 --rc genhtml_function_coverage=1 00:18:34.648 --rc genhtml_legend=1 00:18:34.648 --rc geninfo_all_blocks=1 00:18:34.648 --rc geninfo_unexecuted_blocks=1 00:18:34.648 00:18:34.648 ' 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:34.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.648 --rc genhtml_branch_coverage=1 00:18:34.648 --rc genhtml_function_coverage=1 00:18:34.648 --rc genhtml_legend=1 00:18:34.648 --rc geninfo_all_blocks=1 00:18:34.648 --rc geninfo_unexecuted_blocks=1 00:18:34.648 00:18:34.648 ' 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:34.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.648 --rc genhtml_branch_coverage=1 00:18:34.648 --rc genhtml_function_coverage=1 00:18:34.648 --rc genhtml_legend=1 00:18:34.648 --rc geninfo_all_blocks=1 00:18:34.648 --rc geninfo_unexecuted_blocks=1 00:18:34.648 00:18:34.648 ' 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:34.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.648 --rc genhtml_branch_coverage=1 00:18:34.648 --rc genhtml_function_coverage=1 00:18:34.648 --rc genhtml_legend=1 00:18:34.648 --rc geninfo_all_blocks=1 00:18:34.648 --rc geninfo_unexecuted_blocks=1 00:18:34.648 00:18:34.648 ' 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=296664 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 296664' 00:18:34.648 Process pid: 296664 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 296664 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 296664 ']' 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.648 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:34.648 [2024-12-14 02:59:49.498755] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:34.648 [2024-12-14 02:59:49.498806] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.648 [2024-12-14 02:59:49.559514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:34.648 [2024-12-14 02:59:49.582614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.648 [2024-12-14 02:59:49.582651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:34.648 [2024-12-14 02:59:49.582658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.648 [2024-12-14 02:59:49.582664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.648 [2024-12-14 02:59:49.582669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.648 [2024-12-14 02:59:49.584094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.649 [2024-12-14 02:59:49.584208] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.649 [2024-12-14 02:59:49.584230] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:34.649 [2024-12-14 02:59:49.584231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.649 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.649 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:34.649 02:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:35.585 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:35.844 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:35.844 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:35.844 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:35.844 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:35.844 02:59:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:36.103 Malloc1 00:18:36.103 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:36.362 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:36.621 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:36.880 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:36.880 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:36.880 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:36.880 Malloc2 00:18:36.880 02:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
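Each vfio-user device in this test follows the same per-device recipe that the trace shows for cnode1 and is now repeating for cnode2: a directory under /var/run/vfio-user becomes the transport address, and a malloc bdev is exported through a VFIOUSER listener. Sketched from the rpc.py calls in the log (rpc.py again abbreviates the full scripts/rpc.py path):

  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py nvmf_create_transport -t VFIOUSER                     # created once, before the per-device loop
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

An initiator then attaches to that socket path instead of an IP address, as the spdk_nvme_identify invocation below does with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'.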
00:18:37.139 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:37.399 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:37.660 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:37.660 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:37.660 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:37.660 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:37.660 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:37.660 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:37.660 [2024-12-14 02:59:52.601423] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:37.660 [2024-12-14 02:59:52.601456] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid297257 ] 00:18:37.660 [2024-12-14 02:59:52.640839] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:37.660 [2024-12-14 02:59:52.649698] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:37.660 [2024-12-14 02:59:52.649718] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7efd97cf1000 00:18:37.660 [2024-12-14 02:59:52.650694] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:37.660 [2024-12-14 02:59:52.651695] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:37.660 [2024-12-14 02:59:52.652704] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:37.660 [2024-12-14 02:59:52.653702] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:37.660 [2024-12-14 02:59:52.654706] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:37.660 [2024-12-14 02:59:52.655716] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:37.660 [2024-12-14 02:59:52.656720] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:37.660 [2024-12-14 02:59:52.657722] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:37.660 [2024-12-14 02:59:52.658737] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:37.660 [2024-12-14 02:59:52.658746] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7efd961f5000 00:18:37.660 [2024-12-14 02:59:52.659660] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:37.660 [2024-12-14 02:59:52.669104] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:37.660 [2024-12-14 02:59:52.669125] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:37.660 [2024-12-14 02:59:52.673828] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:37.660 [2024-12-14 02:59:52.673863] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:37.660 [2024-12-14 02:59:52.673933] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:37.660 [2024-12-14 02:59:52.673948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:37.660 [2024-12-14 02:59:52.673953] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:37.660 [2024-12-14 02:59:52.674831] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:37.660 [2024-12-14 02:59:52.674838] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:37.660 [2024-12-14 02:59:52.674845] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:37.660 [2024-12-14 02:59:52.675834] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:37.660 [2024-12-14 02:59:52.675842] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:37.660 [2024-12-14 02:59:52.675848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:37.660 [2024-12-14 02:59:52.676840] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:37.660 [2024-12-14 02:59:52.676847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:37.660 [2024-12-14 02:59:52.677842] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
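The debug lines above and below walk the standard NVMe controller-enable handshake, here carried over the vfio-user transport: the host reads CAP (offset 0x0) and VS (0x8), sees CC.EN=0 and CSTS.RDY=0, programs the admin queue registers (AQA at 0x24, ASQ at 0x28, ACQ at 0x30), sets CC.EN=1 via CC (0x14), then polls CSTS (0x1c) until RDY=1 before issuing Identify. This register-level view can be reproduced on its own with the identify invocation from the trace; a sketch with the binary path shortened to an SPDK checkout:

./build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci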
00:18:37.660 [2024-12-14 02:59:52.677849] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:37.660 [2024-12-14 02:59:52.677853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:37.660 [2024-12-14 02:59:52.677859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:37.660 [2024-12-14 02:59:52.677968] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:37.661 [2024-12-14 02:59:52.677973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:37.661 [2024-12-14 02:59:52.677977] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:37.661 [2024-12-14 02:59:52.678850] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:37.661 [2024-12-14 02:59:52.679858] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:37.661 [2024-12-14 02:59:52.680861] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:37.661 [2024-12-14 02:59:52.681864] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:37.661 [2024-12-14 02:59:52.681946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:37.661 [2024-12-14 02:59:52.682875] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:37.661 [2024-12-14 02:59:52.682882] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:37.661 [2024-12-14 02:59:52.682886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.682902] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:37.661 [2024-12-14 02:59:52.682909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.682922] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:37.661 [2024-12-14 02:59:52.682926] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:37.661 [2024-12-14 02:59:52.682929] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.661 [2024-12-14 02:59:52.682942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:18:37.661 [2024-12-14 02:59:52.682984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:37.661 [2024-12-14 02:59:52.682992] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:37.661 [2024-12-14 02:59:52.682997] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:37.661 [2024-12-14 02:59:52.683001] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:37.661 [2024-12-14 02:59:52.683005] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:37.661 [2024-12-14 02:59:52.683009] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:37.661 [2024-12-14 02:59:52.683013] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:37.661 [2024-12-14 02:59:52.683017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683038] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:37.661 [2024-12-14 02:59:52.683051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:37.661 [2024-12-14 02:59:52.683061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.661 [2024-12-14 02:59:52.683068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.661 [2024-12-14 02:59:52.683076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.661 [2024-12-14 02:59:52.683083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.661 [2024-12-14 02:59:52.683087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683103] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:37.661 [2024-12-14 02:59:52.683112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:37.661 [2024-12-14 02:59:52.683117] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:37.661 
[2024-12-14 02:59:52.683122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:37.661 [2024-12-14 02:59:52.683150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:37.661 [2024-12-14 02:59:52.683197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683211] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:37.661 [2024-12-14 02:59:52.683215] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:37.661 [2024-12-14 02:59:52.683218] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.661 [2024-12-14 02:59:52.683224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:37.661 [2024-12-14 02:59:52.683243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:37.661 [2024-12-14 02:59:52.683250] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:37.661 [2024-12-14 02:59:52.683261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683273] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:37.661 [2024-12-14 02:59:52.683277] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:37.661 [2024-12-14 02:59:52.683280] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.661 [2024-12-14 02:59:52.683285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:37.661 [2024-12-14 02:59:52.683308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:37.661 [2024-12-14 02:59:52.683323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683336] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:37.661 [2024-12-14 02:59:52.683340] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:37.661 [2024-12-14 02:59:52.683343] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.661 [2024-12-14 02:59:52.683348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:37.661 [2024-12-14 02:59:52.683361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:37.661 [2024-12-14 02:59:52.683367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683379] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683398] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:37.661 [2024-12-14 02:59:52.683402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:37.661 [2024-12-14 02:59:52.683406] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:37.661 [2024-12-14 02:59:52.683422] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:37.661 [2024-12-14 02:59:52.683430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:37.661 [2024-12-14 02:59:52.683440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:37.661 [2024-12-14 02:59:52.683450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:37.661 [2024-12-14 02:59:52.683459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:37.661 [2024-12-14 02:59:52.683467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:37.661 [2024-12-14 02:59:52.683477] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:37.661 [2024-12-14 02:59:52.683487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:37.661 [2024-12-14 02:59:52.683498] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:37.662 [2024-12-14 02:59:52.683502] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:37.662 [2024-12-14 02:59:52.683506] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:37.662 [2024-12-14 02:59:52.683509] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:37.662 [2024-12-14 02:59:52.683511] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:37.662 [2024-12-14 02:59:52.683517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:37.662 [2024-12-14 02:59:52.683523] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:37.662 [2024-12-14 02:59:52.683527] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:37.662 [2024-12-14 02:59:52.683530] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.662 [2024-12-14 02:59:52.683536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:37.662 [2024-12-14 02:59:52.683541] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:37.662 [2024-12-14 02:59:52.683545] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:37.662 [2024-12-14 02:59:52.683548] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.662 [2024-12-14 02:59:52.683553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:37.662 [2024-12-14 02:59:52.683559] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:37.662 [2024-12-14 02:59:52.683563] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:37.662 [2024-12-14 02:59:52.683566] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.662 [2024-12-14 02:59:52.683571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:37.662 [2024-12-14 02:59:52.683577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:37.662 [2024-12-14 02:59:52.683587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:18:37.662 [2024-12-14 02:59:52.683597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:37.662 [2024-12-14 02:59:52.683603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:37.662 ===================================================== 00:18:37.662 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:37.662 ===================================================== 00:18:37.662 Controller Capabilities/Features 00:18:37.662 ================================ 00:18:37.662 Vendor ID: 4e58 00:18:37.662 Subsystem Vendor ID: 4e58 00:18:37.662 Serial Number: SPDK1 00:18:37.662 Model Number: SPDK bdev Controller 00:18:37.662 Firmware Version: 25.01 00:18:37.662 Recommended Arb Burst: 6 00:18:37.662 IEEE OUI Identifier: 8d 6b 50 00:18:37.662 Multi-path I/O 00:18:37.662 May have multiple subsystem ports: Yes 00:18:37.662 May have multiple controllers: Yes 00:18:37.662 Associated with SR-IOV VF: No 00:18:37.662 Max Data Transfer Size: 131072 00:18:37.662 Max Number of Namespaces: 32 00:18:37.662 Max Number of I/O Queues: 127 00:18:37.662 NVMe Specification Version (VS): 1.3 00:18:37.662 NVMe Specification Version (Identify): 1.3 00:18:37.662 Maximum Queue Entries: 256 00:18:37.662 Contiguous Queues Required: Yes 00:18:37.662 Arbitration Mechanisms Supported 00:18:37.662 Weighted Round Robin: Not Supported 00:18:37.662 Vendor Specific: Not Supported 00:18:37.662 Reset Timeout: 15000 ms 00:18:37.662 Doorbell Stride: 4 bytes 00:18:37.662 NVM Subsystem Reset: Not Supported 00:18:37.662 Command Sets Supported 00:18:37.662 NVM Command Set: Supported 00:18:37.662 Boot Partition: Not Supported 00:18:37.662 Memory Page Size Minimum: 4096 bytes 00:18:37.662 Memory Page Size Maximum: 4096 bytes 00:18:37.662 Persistent Memory Region: Not Supported 00:18:37.662 Optional Asynchronous Events Supported 00:18:37.662 Namespace Attribute Notices: Supported 00:18:37.662 Firmware Activation Notices: Not Supported 00:18:37.662 ANA Change Notices: Not Supported 00:18:37.662 PLE Aggregate Log Change Notices: Not Supported 00:18:37.662 LBA Status Info Alert Notices: Not Supported 00:18:37.662 EGE Aggregate Log Change Notices: Not Supported 00:18:37.662 Normal NVM Subsystem Shutdown event: Not Supported 00:18:37.662 Zone Descriptor Change Notices: Not Supported 00:18:37.662 Discovery Log Change Notices: Not Supported 00:18:37.662 Controller Attributes 00:18:37.662 128-bit Host Identifier: Supported 00:18:37.662 Non-Operational Permissive Mode: Not Supported 00:18:37.662 NVM Sets: Not Supported 00:18:37.662 Read Recovery Levels: Not Supported 00:18:37.662 Endurance Groups: Not Supported 00:18:37.662 Predictable Latency Mode: Not Supported 00:18:37.662 Traffic Based Keep ALive: Not Supported 00:18:37.662 Namespace Granularity: Not Supported 00:18:37.662 SQ Associations: Not Supported 00:18:37.662 UUID List: Not Supported 00:18:37.662 Multi-Domain Subsystem: Not Supported 00:18:37.662 Fixed Capacity Management: Not Supported 00:18:37.662 Variable Capacity Management: Not Supported 00:18:37.662 Delete Endurance Group: Not Supported 00:18:37.662 Delete NVM Set: Not Supported 00:18:37.662 Extended LBA Formats Supported: Not Supported 00:18:37.662 Flexible Data Placement Supported: Not Supported 00:18:37.662 00:18:37.662 Controller Memory Buffer Support 00:18:37.662 ================================ 00:18:37.662 
Supported: No 00:18:37.662 00:18:37.662 Persistent Memory Region Support 00:18:37.662 ================================ 00:18:37.662 Supported: No 00:18:37.662 00:18:37.662 Admin Command Set Attributes 00:18:37.662 ============================ 00:18:37.662 Security Send/Receive: Not Supported 00:18:37.662 Format NVM: Not Supported 00:18:37.662 Firmware Activate/Download: Not Supported 00:18:37.662 Namespace Management: Not Supported 00:18:37.662 Device Self-Test: Not Supported 00:18:37.662 Directives: Not Supported 00:18:37.662 NVMe-MI: Not Supported 00:18:37.662 Virtualization Management: Not Supported 00:18:37.662 Doorbell Buffer Config: Not Supported 00:18:37.662 Get LBA Status Capability: Not Supported 00:18:37.662 Command & Feature Lockdown Capability: Not Supported 00:18:37.662 Abort Command Limit: 4 00:18:37.662 Async Event Request Limit: 4 00:18:37.662 Number of Firmware Slots: N/A 00:18:37.662 Firmware Slot 1 Read-Only: N/A 00:18:37.662 Firmware Activation Without Reset: N/A 00:18:37.662 Multiple Update Detection Support: N/A 00:18:37.662 Firmware Update Granularity: No Information Provided 00:18:37.662 Per-Namespace SMART Log: No 00:18:37.662 Asymmetric Namespace Access Log Page: Not Supported 00:18:37.662 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:37.662 Command Effects Log Page: Supported 00:18:37.662 Get Log Page Extended Data: Supported 00:18:37.662 Telemetry Log Pages: Not Supported 00:18:37.662 Persistent Event Log Pages: Not Supported 00:18:37.662 Supported Log Pages Log Page: May Support 00:18:37.662 Commands Supported & Effects Log Page: Not Supported 00:18:37.662 Feature Identifiers & Effects Log Page:May Support 00:18:37.662 NVMe-MI Commands & Effects Log Page: May Support 00:18:37.662 Data Area 4 for Telemetry Log: Not Supported 00:18:37.662 Error Log Page Entries Supported: 128 00:18:37.662 Keep Alive: Supported 00:18:37.662 Keep Alive Granularity: 10000 ms 00:18:37.662 00:18:37.662 NVM Command Set Attributes 00:18:37.662 ========================== 00:18:37.662 Submission Queue Entry Size 00:18:37.662 Max: 64 00:18:37.662 Min: 64 00:18:37.662 Completion Queue Entry Size 00:18:37.662 Max: 16 00:18:37.662 Min: 16 00:18:37.662 Number of Namespaces: 32 00:18:37.662 Compare Command: Supported 00:18:37.662 Write Uncorrectable Command: Not Supported 00:18:37.662 Dataset Management Command: Supported 00:18:37.662 Write Zeroes Command: Supported 00:18:37.662 Set Features Save Field: Not Supported 00:18:37.662 Reservations: Not Supported 00:18:37.662 Timestamp: Not Supported 00:18:37.662 Copy: Supported 00:18:37.662 Volatile Write Cache: Present 00:18:37.662 Atomic Write Unit (Normal): 1 00:18:37.662 Atomic Write Unit (PFail): 1 00:18:37.662 Atomic Compare & Write Unit: 1 00:18:37.662 Fused Compare & Write: Supported 00:18:37.662 Scatter-Gather List 00:18:37.662 SGL Command Set: Supported (Dword aligned) 00:18:37.662 SGL Keyed: Not Supported 00:18:37.662 SGL Bit Bucket Descriptor: Not Supported 00:18:37.662 SGL Metadata Pointer: Not Supported 00:18:37.662 Oversized SGL: Not Supported 00:18:37.662 SGL Metadata Address: Not Supported 00:18:37.662 SGL Offset: Not Supported 00:18:37.662 Transport SGL Data Block: Not Supported 00:18:37.662 Replay Protected Memory Block: Not Supported 00:18:37.662 00:18:37.662 Firmware Slot Information 00:18:37.662 ========================= 00:18:37.662 Active slot: 1 00:18:37.662 Slot 1 Firmware Revision: 25.01 00:18:37.662 00:18:37.662 00:18:37.662 Commands Supported and Effects 00:18:37.662 ============================== 00:18:37.662 Admin 
Commands 00:18:37.662 -------------- 00:18:37.662 Get Log Page (02h): Supported 00:18:37.662 Identify (06h): Supported 00:18:37.662 Abort (08h): Supported 00:18:37.662 Set Features (09h): Supported 00:18:37.662 Get Features (0Ah): Supported 00:18:37.662 Asynchronous Event Request (0Ch): Supported 00:18:37.662 Keep Alive (18h): Supported 00:18:37.662 I/O Commands 00:18:37.662 ------------ 00:18:37.662 Flush (00h): Supported LBA-Change 00:18:37.663 Write (01h): Supported LBA-Change 00:18:37.663 Read (02h): Supported 00:18:37.663 Compare (05h): Supported 00:18:37.663 Write Zeroes (08h): Supported LBA-Change 00:18:37.663 Dataset Management (09h): Supported LBA-Change 00:18:37.663 Copy (19h): Supported LBA-Change 00:18:37.663 00:18:37.663 Error Log 00:18:37.663 ========= 00:18:37.663 00:18:37.663 Arbitration 00:18:37.663 =========== 00:18:37.663 Arbitration Burst: 1 00:18:37.663 00:18:37.663 Power Management 00:18:37.663 ================ 00:18:37.663 Number of Power States: 1 00:18:37.663 Current Power State: Power State #0 00:18:37.663 Power State #0: 00:18:37.663 Max Power: 0.00 W 00:18:37.663 Non-Operational State: Operational 00:18:37.663 Entry Latency: Not Reported 00:18:37.663 Exit Latency: Not Reported 00:18:37.663 Relative Read Throughput: 0 00:18:37.663 Relative Read Latency: 0 00:18:37.663 Relative Write Throughput: 0 00:18:37.663 Relative Write Latency: 0 00:18:37.663 Idle Power: Not Reported 00:18:37.663 Active Power: Not Reported 00:18:37.663 Non-Operational Permissive Mode: Not Supported 00:18:37.663 00:18:37.663 Health Information 00:18:37.663 ================== 00:18:37.663 Critical Warnings: 00:18:37.663 Available Spare Space: OK 00:18:37.663 Temperature: OK 00:18:37.663 Device Reliability: OK 00:18:37.663 Read Only: No 00:18:37.663 Volatile Memory Backup: OK 00:18:37.663 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:37.663 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:37.663 Available Spare: 0% 00:18:37.663 Available Sp[2024-12-14 02:59:52.683681] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:37.663 [2024-12-14 02:59:52.683690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:37.663 [2024-12-14 02:59:52.683713] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:37.663 [2024-12-14 02:59:52.683721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.663 [2024-12-14 02:59:52.683727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.663 [2024-12-14 02:59:52.683732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.663 [2024-12-14 02:59:52.683738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.663 [2024-12-14 02:59:52.687320] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:37.663 [2024-12-14 02:59:52.687330] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:37.663 [2024-12-14 02:59:52.687919] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:37.663 [2024-12-14 02:59:52.687965] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:37.663 [2024-12-14 02:59:52.687971] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:37.663 [2024-12-14 02:59:52.688926] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:37.663 [2024-12-14 02:59:52.688935] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:37.663 [2024-12-14 02:59:52.688982] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:37.663 [2024-12-14 02:59:52.689949] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:37.663 are Threshold: 0% 00:18:37.663 Life Percentage Used: 0% 00:18:37.663 Data Units Read: 0 00:18:37.663 Data Units Written: 0 00:18:37.663 Host Read Commands: 0 00:18:37.663 Host Write Commands: 0 00:18:37.663 Controller Busy Time: 0 minutes 00:18:37.663 Power Cycles: 0 00:18:37.663 Power On Hours: 0 hours 00:18:37.663 Unsafe Shutdowns: 0 00:18:37.663 Unrecoverable Media Errors: 0 00:18:37.663 Lifetime Error Log Entries: 0 00:18:37.663 Warning Temperature Time: 0 minutes 00:18:37.663 Critical Temperature Time: 0 minutes 00:18:37.663 00:18:37.663 Number of Queues 00:18:37.663 ================ 00:18:37.663 Number of I/O Submission Queues: 127 00:18:37.663 Number of I/O Completion Queues: 127 00:18:37.663 00:18:37.663 Active Namespaces 00:18:37.663 ================= 00:18:37.663 Namespace ID:1 00:18:37.663 Error Recovery Timeout: Unlimited 00:18:37.663 Command Set Identifier: NVM (00h) 00:18:37.663 Deallocate: Supported 00:18:37.663 Deallocated/Unwritten Error: Not Supported 00:18:37.663 Deallocated Read Value: Unknown 00:18:37.663 Deallocate in Write Zeroes: Not Supported 00:18:37.663 Deallocated Guard Field: 0xFFFF 00:18:37.663 Flush: Supported 00:18:37.663 Reservation: Supported 00:18:37.663 Namespace Sharing Capabilities: Multiple Controllers 00:18:37.663 Size (in LBAs): 131072 (0GiB) 00:18:37.663 Capacity (in LBAs): 131072 (0GiB) 00:18:37.663 Utilization (in LBAs): 131072 (0GiB) 00:18:37.663 NGUID: 0649AF08D9E44862A305F138A06CAA7D 00:18:37.663 UUID: 0649af08-d9e4-4862-a305-f138a06caa7d 00:18:37.663 Thin Provisioning: Not Supported 00:18:37.663 Per-NS Atomic Units: Yes 00:18:37.663 Atomic Boundary Size (Normal): 0 00:18:37.663 Atomic Boundary Size (PFail): 0 00:18:37.663 Atomic Boundary Offset: 0 00:18:37.663 Maximum Single Source Range Length: 65535 00:18:37.663 Maximum Copy Length: 65535 00:18:37.663 Maximum Source Range Count: 1 00:18:37.663 NGUID/EUI64 Never Reused: No 00:18:37.663 Namespace Write Protected: No 00:18:37.663 Number of LBA Formats: 1 00:18:37.663 Current LBA Format: LBA Format #00 00:18:37.663 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:37.663 00:18:37.663 02:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
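The trace line above ends with the spdk_nvme_perf invocation whose output follows: a 5-second, 4 KiB, queue-depth-128 read workload pinned to lcore 1 (-w read -o 4096 -q 128 -t 5 -c 0x2) against the first vfio-user controller, followed further down by the same run with -w write. For reference, the two invocations with the binary path shortened to an SPDK checkout (all other flags taken verbatim from the trace):

./build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
./build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2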
00:18:37.922 [2024-12-14 02:59:52.906094] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:43.196 Initializing NVMe Controllers 00:18:43.196 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:43.196 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:43.196 Initialization complete. Launching workers. 00:18:43.196 ======================================================== 00:18:43.196 Latency(us) 00:18:43.196 Device Information : IOPS MiB/s Average min max 00:18:43.196 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39966.40 156.12 3202.82 960.82 8278.32 00:18:43.196 ======================================================== 00:18:43.196 Total : 39966.40 156.12 3202.82 960.82 8278.32 00:18:43.196 00:18:43.196 [2024-12-14 02:59:57.927686] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:43.196 02:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:43.196 [2024-12-14 02:59:58.163783] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:48.543 Initializing NVMe Controllers 00:18:48.543 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:48.543 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:48.544 Initialization complete. Launching workers. 
00:18:48.544 ======================================================== 00:18:48.544 Latency(us) 00:18:48.544 Device Information : IOPS MiB/s Average min max 00:18:48.544 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15820.80 61.80 8100.92 6984.98 15963.14 00:18:48.544 ======================================================== 00:18:48.544 Total : 15820.80 61.80 8100.92 6984.98 15963.14 00:18:48.544 00:18:48.544 [2024-12-14 03:00:03.201455] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:48.544 03:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:48.544 [2024-12-14 03:00:03.411454] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:53.823 [2024-12-14 03:00:08.487605] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:53.823 Initializing NVMe Controllers 00:18:53.823 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:53.823 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:53.823 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:53.823 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:53.823 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:53.823 Initialization complete. Launching workers. 00:18:53.823 Starting thread on core 2 00:18:53.823 Starting thread on core 3 00:18:53.823 Starting thread on core 1 00:18:53.823 03:00:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:53.823 [2024-12-14 03:00:08.789710] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:57.137 [2024-12-14 03:00:11.870415] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:57.137 Initializing NVMe Controllers 00:18:57.137 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:57.138 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:57.138 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:57.138 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:57.138 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:57.138 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:57.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:57.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:57.138 Initialization complete. Launching workers. 
00:18:57.138 Starting thread on core 1 with urgent priority queue 00:18:57.138 Starting thread on core 2 with urgent priority queue 00:18:57.138 Starting thread on core 3 with urgent priority queue 00:18:57.138 Starting thread on core 0 with urgent priority queue 00:18:57.138 SPDK bdev Controller (SPDK1 ) core 0: 6555.67 IO/s 15.25 secs/100000 ios 00:18:57.138 SPDK bdev Controller (SPDK1 ) core 1: 5840.33 IO/s 17.12 secs/100000 ios 00:18:57.138 SPDK bdev Controller (SPDK1 ) core 2: 5801.33 IO/s 17.24 secs/100000 ios 00:18:57.138 SPDK bdev Controller (SPDK1 ) core 3: 7643.00 IO/s 13.08 secs/100000 ios 00:18:57.138 ======================================================== 00:18:57.138 00:18:57.138 03:00:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:57.138 [2024-12-14 03:00:12.155772] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:57.138 Initializing NVMe Controllers 00:18:57.138 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:57.138 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:57.138 Namespace ID: 1 size: 0GB 00:18:57.138 Initialization complete. 00:18:57.138 INFO: using host memory buffer for IO 00:18:57.138 Hello world! 00:18:57.138 [2024-12-14 03:00:12.190980] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:57.138 03:00:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:57.399 [2024-12-14 03:00:12.468695] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:58.783 Initializing NVMe Controllers 00:18:58.783 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:58.783 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:58.783 Initialization complete. Launching workers. 
00:18:58.783 submit (in ns) avg, min, max = 6290.4, 3158.1, 3999961.9 00:18:58.783 complete (in ns) avg, min, max = 20470.0, 1721.9, 4005798.1 00:18:58.783 00:18:58.783 Submit histogram 00:18:58.783 ================ 00:18:58.783 Range in us Cumulative Count 00:18:58.783 3.154 - 3.170: 0.0546% ( 9) 00:18:58.783 3.170 - 3.185: 0.0971% ( 7) 00:18:58.783 3.185 - 3.200: 0.1820% ( 14) 00:18:58.783 3.200 - 3.215: 0.7341% ( 91) 00:18:58.783 3.215 - 3.230: 3.2276% ( 411) 00:18:58.783 3.230 - 3.246: 8.6331% ( 891) 00:18:58.783 3.246 - 3.261: 14.2692% ( 929) 00:18:58.783 3.261 - 3.276: 20.9185% ( 1096) 00:18:58.783 3.276 - 3.291: 28.2655% ( 1211) 00:18:58.783 3.291 - 3.307: 34.9997% ( 1110) 00:18:58.783 3.307 - 3.322: 40.1869% ( 855) 00:18:58.783 3.322 - 3.337: 44.9493% ( 785) 00:18:58.783 3.337 - 3.352: 49.6997% ( 783) 00:18:58.783 3.352 - 3.368: 53.2488% ( 585) 00:18:58.783 3.368 - 3.383: 59.2186% ( 984) 00:18:58.783 3.383 - 3.398: 66.0681% ( 1129) 00:18:58.783 3.398 - 3.413: 71.3948% ( 878) 00:18:58.783 3.413 - 3.429: 77.3221% ( 977) 00:18:58.783 3.429 - 3.444: 81.8298% ( 743) 00:18:58.784 3.444 - 3.459: 84.7661% ( 484) 00:18:58.784 3.459 - 3.474: 86.1736% ( 232) 00:18:58.784 3.474 - 3.490: 87.0291% ( 141) 00:18:58.784 3.490 - 3.505: 87.6115% ( 96) 00:18:58.784 3.505 - 3.520: 88.2060% ( 98) 00:18:58.784 3.520 - 3.535: 88.9401% ( 121) 00:18:58.784 3.535 - 3.550: 89.8319% ( 147) 00:18:58.784 3.550 - 3.566: 90.8269% ( 164) 00:18:58.784 3.566 - 3.581: 91.7612% ( 154) 00:18:58.784 3.581 - 3.596: 92.4892% ( 120) 00:18:58.784 3.596 - 3.611: 93.2719% ( 129) 00:18:58.784 3.611 - 3.627: 93.9695% ( 115) 00:18:58.784 3.627 - 3.642: 94.8674% ( 148) 00:18:58.784 3.642 - 3.657: 95.6258% ( 125) 00:18:58.784 3.657 - 3.672: 96.4509% ( 136) 00:18:58.784 3.672 - 3.688: 97.2092% ( 125) 00:18:58.784 3.688 - 3.703: 97.9373% ( 120) 00:18:58.784 3.703 - 3.718: 98.3316% ( 65) 00:18:58.784 3.718 - 3.733: 98.7260% ( 65) 00:18:58.784 3.733 - 3.749: 99.0293% ( 50) 00:18:58.784 3.749 - 3.764: 99.1749% ( 24) 00:18:58.784 3.764 - 3.779: 99.3326% ( 26) 00:18:58.784 3.779 - 3.794: 99.4115% ( 13) 00:18:58.784 3.794 - 3.810: 99.4783% ( 11) 00:18:58.784 3.810 - 3.825: 99.5268% ( 8) 00:18:58.784 3.825 - 3.840: 99.5632% ( 6) 00:18:58.784 3.840 - 3.855: 99.5753% ( 2) 00:18:58.784 3.855 - 3.870: 99.5935% ( 3) 00:18:58.784 3.870 - 3.886: 99.6057% ( 2) 00:18:58.784 3.886 - 3.901: 99.6178% ( 2) 00:18:58.784 3.901 - 3.931: 99.6360% ( 3) 00:18:58.784 3.931 - 3.962: 99.6421% ( 1) 00:18:58.784 3.962 - 3.992: 99.6481% ( 1) 00:18:58.784 3.992 - 4.023: 99.6542% ( 1) 00:18:58.784 4.023 - 4.053: 99.6663% ( 2) 00:18:58.784 4.084 - 4.114: 99.6724% ( 1) 00:18:58.784 4.206 - 4.236: 99.6785% ( 1) 00:18:58.784 5.608 - 5.638: 99.6845% ( 1) 00:18:58.784 5.699 - 5.730: 99.6906% ( 1) 00:18:58.784 5.790 - 5.821: 99.6967% ( 1) 00:18:58.784 5.851 - 5.882: 99.7027% ( 1) 00:18:58.784 5.882 - 5.912: 99.7088% ( 1) 00:18:58.784 5.912 - 5.943: 99.7149% ( 1) 00:18:58.784 5.943 - 5.973: 99.7209% ( 1) 00:18:58.784 6.217 - 6.248: 99.7270% ( 1) 00:18:58.784 6.248 - 6.278: 99.7331% ( 1) 00:18:58.784 6.309 - 6.339: 99.7391% ( 1) 00:18:58.784 6.430 - 6.461: 99.7452% ( 1) 00:18:58.784 6.491 - 6.522: 99.7513% ( 1) 00:18:58.784 6.613 - 6.644: 99.7573% ( 1) 00:18:58.784 6.644 - 6.674: 99.7695% ( 2) 00:18:58.784 6.918 - 6.949: 99.7755% ( 1) 00:18:58.784 6.949 - 6.979: 99.7816% ( 1) 00:18:58.784 7.010 - 7.040: 99.7877% ( 1) 00:18:58.784 7.040 - 7.070: 99.7937% ( 1) 00:18:58.784 7.070 - 7.101: 99.7998% ( 1) 00:18:58.784 7.101 - 7.131: 99.8119% ( 2) 00:18:58.784 7.223 - 7.253: 
99.8180% ( 1) 00:18:58.784 7.284 - 7.314: 99.8241% ( 1) 00:18:58.784 7.345 - 7.375: 99.8301% ( 1) 00:18:58.784 [2024-12-14 03:00:13.490616] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:58.784 7.436 - 7.467: 99.8362% ( 1) 00:18:58.784 7.497 - 7.528: 99.8423% ( 1) 00:18:58.784 7.528 - 7.558: 99.8483% ( 1) 00:18:58.784 7.558 - 7.589: 99.8544% ( 1) 00:18:58.784 7.619 - 7.650: 99.8665% ( 2) 00:18:58.784 7.710 - 7.741: 99.8787% ( 2) 00:18:58.784 7.985 - 8.046: 99.8847% ( 1) 00:18:58.784 8.046 - 8.107: 99.8908% ( 1) 00:18:58.784 8.107 - 8.168: 99.8969% ( 1) 00:18:58.784 8.229 - 8.290: 99.9029% ( 1) 00:18:58.784 8.411 - 8.472: 99.9090% ( 1) 00:18:58.784 8.838 - 8.899: 99.9151% ( 1) 00:18:58.784 9.265 - 9.326: 99.9211% ( 1) 00:18:58.784 9.874 - 9.935: 99.9272% ( 1) 00:18:58.784 3994.575 - 4025.783: 100.0000% ( 12) 00:18:58.784 00:18:58.784 Complete histogram 00:18:58.784 ================== 00:18:58.784 Range in us Cumulative Count 00:18:58.784 1.722 - 1.730: 0.0607% ( 10) 00:18:58.784 1.730 - 1.737: 0.1941% ( 22) 00:18:58.784 1.737 - 1.745: 0.3033% ( 18) 00:18:58.784 1.745 - 1.752: 0.3276% ( 4) 00:18:58.784 1.752 - 1.760: 0.3458% ( 3) 00:18:58.784 1.760 - 1.768: 0.5521% ( 34) 00:18:58.784 1.768 - 1.775: 3.9374% ( 558) 00:18:58.784 1.775 - 1.783: 21.8104% ( 2946) 00:18:58.784 1.783 - 1.790: 51.6108% ( 4912) 00:18:58.784 1.790 - 1.798: 72.0743% ( 3373) 00:18:58.784 1.798 - 1.806: 79.7488% ( 1265) 00:18:58.784 1.806 - 1.813: 83.8258% ( 672) 00:18:58.784 1.813 - 1.821: 86.9077% ( 508) 00:18:58.784 1.821 - 1.829: 88.6186% ( 282) 00:18:58.784 1.829 - 1.836: 90.0261% ( 232) 00:18:58.784 1.836 - 1.844: 91.7855% ( 290) 00:18:58.784 1.844 - 1.851: 93.7147% ( 318) 00:18:58.784 1.851 - 1.859: 95.2800% ( 258) 00:18:58.784 1.859 - 1.867: 96.4752% ( 197) 00:18:58.784 1.867 - 1.874: 97.2638% ( 130) 00:18:58.784 1.874 - 1.882: 97.7553% ( 81) 00:18:58.784 1.882 - 1.890: 98.1011% ( 57) 00:18:58.784 1.890 - 1.897: 98.3377% ( 39) 00:18:58.784 1.897 - 1.905: 98.4833% ( 24) 00:18:58.784 1.905 - 1.912: 98.6046% ( 20) 00:18:58.784 1.912 - 1.920: 98.6471% ( 7) 00:18:58.784 1.920 - 1.928: 98.6956% ( 8) 00:18:58.784 1.928 - 1.935: 98.7138% ( 3) 00:18:58.784 1.935 - 1.943: 98.7624% ( 8) 00:18:58.784 1.943 - 1.950: 98.8109% ( 8) 00:18:58.784 1.950 - 1.966: 98.8291% ( 3) 00:18:58.784 1.966 - 1.981: 98.8473% ( 3) 00:18:58.784 1.996 - 2.011: 98.8655% ( 3) 00:18:58.784 2.011 - 2.027: 99.0596% ( 32) 00:18:58.784 2.027 - 2.042: 99.2295% ( 28) 00:18:58.784 2.042 - 2.057: 99.3023% ( 12) 00:18:58.784 2.057 - 2.072: 99.3144% ( 2) 00:18:58.784 2.072 - 2.088: 99.3266% ( 2) 00:18:58.784 2.103 - 2.118: 99.3387% ( 2) 00:18:58.784 2.118 - 2.133: 99.3569% ( 3) 00:18:58.784 2.210 - 2.225: 99.3630% ( 1) 00:18:58.784 2.301 - 2.316: 99.3751% ( 2) 00:18:58.784 3.840 - 3.855: 99.3812% ( 1) 00:18:58.784 4.206 - 4.236: 99.3872% ( 1) 00:18:58.784 4.358 - 4.389: 99.3933% ( 1) 00:18:58.784 4.389 - 4.419: 99.3994% ( 1) 00:18:58.784 4.724 - 4.754: 99.4054% ( 1) 00:18:58.784 4.846 - 4.876: 99.4115% ( 1) 00:18:58.784 4.907 - 4.937: 99.4236% ( 2) 00:18:58.784 5.211 - 5.242: 99.4297% ( 1) 00:18:58.784 5.242 - 5.272: 99.4358% ( 1) 00:18:58.784 5.303 - 5.333: 99.4418% ( 1) 00:18:58.784 5.425 - 5.455: 99.4600% ( 3) 00:18:58.784 5.638 - 5.669: 99.4661% ( 1) 00:18:58.784 5.943 - 5.973: 99.4722% ( 1) 00:18:58.784 5.973 - 6.004: 99.4783% ( 1) 00:18:58.784 6.065 - 6.095: 99.4843% ( 1) 00:18:58.784 6.126 - 6.156: 99.4904% ( 1) 00:18:58.784 6.491 - 6.522: 99.4965% ( 1) 00:18:58.784 6.613 - 6.644: 99.5086% ( 
2) 00:18:58.784 6.705 - 6.735: 99.5147% ( 1) 00:18:58.784 7.070 - 7.101: 99.5207% ( 1) 00:18:58.784 7.223 - 7.253: 99.5268% ( 1) 00:18:58.784 14.629 - 14.690: 99.5329% ( 1) 00:18:58.784 3994.575 - 4025.783: 100.0000% ( 77) 00:18:58.784 00:18:58.784 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:58.784 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:58.784 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:58.784 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:58.784 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:58.784 [ 00:18:58.784 { 00:18:58.784 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:58.784 "subtype": "Discovery", 00:18:58.784 "listen_addresses": [], 00:18:58.784 "allow_any_host": true, 00:18:58.784 "hosts": [] 00:18:58.784 }, 00:18:58.784 { 00:18:58.784 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:58.784 "subtype": "NVMe", 00:18:58.784 "listen_addresses": [ 00:18:58.784 { 00:18:58.784 "trtype": "VFIOUSER", 00:18:58.784 "adrfam": "IPv4", 00:18:58.784 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:58.784 "trsvcid": "0" 00:18:58.784 } 00:18:58.784 ], 00:18:58.784 "allow_any_host": true, 00:18:58.784 "hosts": [], 00:18:58.784 "serial_number": "SPDK1", 00:18:58.784 "model_number": "SPDK bdev Controller", 00:18:58.784 "max_namespaces": 32, 00:18:58.784 "min_cntlid": 1, 00:18:58.784 "max_cntlid": 65519, 00:18:58.784 "namespaces": [ 00:18:58.784 { 00:18:58.784 "nsid": 1, 00:18:58.784 "bdev_name": "Malloc1", 00:18:58.784 "name": "Malloc1", 00:18:58.784 "nguid": "0649AF08D9E44862A305F138A06CAA7D", 00:18:58.784 "uuid": "0649af08-d9e4-4862-a305-f138a06caa7d" 00:18:58.784 } 00:18:58.784 ] 00:18:58.784 }, 00:18:58.784 { 00:18:58.784 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:58.784 "subtype": "NVMe", 00:18:58.784 "listen_addresses": [ 00:18:58.784 { 00:18:58.784 "trtype": "VFIOUSER", 00:18:58.784 "adrfam": "IPv4", 00:18:58.784 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:58.784 "trsvcid": "0" 00:18:58.784 } 00:18:58.784 ], 00:18:58.784 "allow_any_host": true, 00:18:58.784 "hosts": [], 00:18:58.785 "serial_number": "SPDK2", 00:18:58.785 "model_number": "SPDK bdev Controller", 00:18:58.785 "max_namespaces": 32, 00:18:58.785 "min_cntlid": 1, 00:18:58.785 "max_cntlid": 65519, 00:18:58.785 "namespaces": [ 00:18:58.785 { 00:18:58.785 "nsid": 1, 00:18:58.785 "bdev_name": "Malloc2", 00:18:58.785 "name": "Malloc2", 00:18:58.785 "nguid": "F3C4D9912C194E59A1BEFEBDD91B45A9", 00:18:58.785 "uuid": "f3c4d991-2c19-4e59-a1be-febdd91b45a9" 00:18:58.785 } 00:18:58.785 ] 00:18:58.785 } 00:18:58.785 ] 00:18:58.785 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:58.785 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:58.785 03:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=301148 00:18:58.785 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:58.785 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:58.785 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:58.785 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:58.785 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:58.785 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:58.785 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:58.785 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:58.785 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:58.785 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:58.785 [2024-12-14 03:00:13.879746] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:59.054 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:59.054 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:59.054 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:59.054 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:59.054 03:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:59.054 Malloc3 00:18:59.054 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:59.323 [2024-12-14 03:00:14.348269] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:59.323 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:59.323 Asynchronous Event Request test 00:18:59.323 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:59.323 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:59.323 Registering asynchronous event callbacks... 00:18:59.323 Starting namespace attribute notice tests for all controllers... 00:18:59.323 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:59.323 aer_cb - Changed Namespace 00:18:59.323 Cleaning up... 
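For reference, the namespace hot-add that raises the "Changed Namespace" notice above reduces to two RPC calls against the running target; a minimal sketch by hand, reusing the same rpc.py path and arguments that appear in this run (nothing beyond what the log already shows):
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # create a 64 MiB malloc bdev with 512-byte blocks (the Malloc3 above)
    $rpc bdev_malloc_create 64 512 --name Malloc3
    # attach it as namespace 2 of cnode1; the target raises a Namespace
    # Attribute Changed AEN, which the aer tool reports via aer_cb and then
    # touches /tmp/aer_touch_file so the script can stop waiting
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
    # confirm the new namespace is listed under cnode1
    $rpc nvmf_get_subsystems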
00:18:59.593 [ 00:18:59.593 { 00:18:59.593 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:59.593 "subtype": "Discovery", 00:18:59.593 "listen_addresses": [], 00:18:59.593 "allow_any_host": true, 00:18:59.593 "hosts": [] 00:18:59.593 }, 00:18:59.593 { 00:18:59.593 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:59.593 "subtype": "NVMe", 00:18:59.593 "listen_addresses": [ 00:18:59.593 { 00:18:59.593 "trtype": "VFIOUSER", 00:18:59.593 "adrfam": "IPv4", 00:18:59.593 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:59.593 "trsvcid": "0" 00:18:59.593 } 00:18:59.593 ], 00:18:59.593 "allow_any_host": true, 00:18:59.593 "hosts": [], 00:18:59.593 "serial_number": "SPDK1", 00:18:59.593 "model_number": "SPDK bdev Controller", 00:18:59.593 "max_namespaces": 32, 00:18:59.593 "min_cntlid": 1, 00:18:59.593 "max_cntlid": 65519, 00:18:59.593 "namespaces": [ 00:18:59.593 { 00:18:59.593 "nsid": 1, 00:18:59.593 "bdev_name": "Malloc1", 00:18:59.593 "name": "Malloc1", 00:18:59.593 "nguid": "0649AF08D9E44862A305F138A06CAA7D", 00:18:59.593 "uuid": "0649af08-d9e4-4862-a305-f138a06caa7d" 00:18:59.593 }, 00:18:59.593 { 00:18:59.593 "nsid": 2, 00:18:59.593 "bdev_name": "Malloc3", 00:18:59.593 "name": "Malloc3", 00:18:59.593 "nguid": "E5F9152E2AEE4781BCD71C4885082E67", 00:18:59.593 "uuid": "e5f9152e-2aee-4781-bcd7-1c4885082e67" 00:18:59.593 } 00:18:59.593 ] 00:18:59.593 }, 00:18:59.593 { 00:18:59.593 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:59.593 "subtype": "NVMe", 00:18:59.593 "listen_addresses": [ 00:18:59.593 { 00:18:59.593 "trtype": "VFIOUSER", 00:18:59.593 "adrfam": "IPv4", 00:18:59.593 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:59.593 "trsvcid": "0" 00:18:59.593 } 00:18:59.593 ], 00:18:59.593 "allow_any_host": true, 00:18:59.593 "hosts": [], 00:18:59.593 "serial_number": "SPDK2", 00:18:59.593 "model_number": "SPDK bdev Controller", 00:18:59.593 "max_namespaces": 32, 00:18:59.593 "min_cntlid": 1, 00:18:59.593 "max_cntlid": 65519, 00:18:59.594 "namespaces": [ 00:18:59.594 { 00:18:59.594 "nsid": 1, 00:18:59.594 "bdev_name": "Malloc2", 00:18:59.594 "name": "Malloc2", 00:18:59.594 "nguid": "F3C4D9912C194E59A1BEFEBDD91B45A9", 00:18:59.594 "uuid": "f3c4d991-2c19-4e59-a1be-febdd91b45a9" 00:18:59.594 } 00:18:59.594 ] 00:18:59.594 } 00:18:59.594 ] 00:18:59.594 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 301148 00:18:59.594 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:59.594 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:59.594 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:59.594 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:59.594 [2024-12-14 03:00:14.608984] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:59.594 [2024-12-14 03:00:14.609024] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301374 ] 00:18:59.594 [2024-12-14 03:00:14.650639] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:59.594 [2024-12-14 03:00:14.652870] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:59.594 [2024-12-14 03:00:14.652890] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ffac1f3a000 00:18:59.594 [2024-12-14 03:00:14.653875] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:59.594 [2024-12-14 03:00:14.654881] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:59.594 [2024-12-14 03:00:14.655886] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:59.594 [2024-12-14 03:00:14.656898] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:59.594 [2024-12-14 03:00:14.657909] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:59.594 [2024-12-14 03:00:14.658911] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:59.594 [2024-12-14 03:00:14.659916] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:59.594 [2024-12-14 03:00:14.660923] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:59.594 [2024-12-14 03:00:14.661932] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:59.594 [2024-12-14 03:00:14.661942] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ffac0c44000 00:18:59.594 [2024-12-14 03:00:14.662856] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:59.594 [2024-12-14 03:00:14.675574] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:59.594 [2024-12-14 03:00:14.675600] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:59.594 [2024-12-14 03:00:14.680690] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:59.594 [2024-12-14 03:00:14.680723] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:59.594 [2024-12-14 03:00:14.680792] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:59.594 
[2024-12-14 03:00:14.680807] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:59.594 [2024-12-14 03:00:14.680812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:59.594 [2024-12-14 03:00:14.681693] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:59.594 [2024-12-14 03:00:14.681702] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:59.594 [2024-12-14 03:00:14.681709] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:59.594 [2024-12-14 03:00:14.682714] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:59.594 [2024-12-14 03:00:14.682722] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:59.594 [2024-12-14 03:00:14.682729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:59.594 [2024-12-14 03:00:14.683715] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:59.594 [2024-12-14 03:00:14.683724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:59.594 [2024-12-14 03:00:14.684718] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:59.594 [2024-12-14 03:00:14.684728] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:59.594 [2024-12-14 03:00:14.684733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:59.594 [2024-12-14 03:00:14.684739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:59.594 [2024-12-14 03:00:14.684846] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:59.594 [2024-12-14 03:00:14.684850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:59.594 [2024-12-14 03:00:14.684855] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:59.594 [2024-12-14 03:00:14.685730] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:59.594 [2024-12-14 03:00:14.686734] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:59.594 [2024-12-14 03:00:14.687744] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:59.594 [2024-12-14 03:00:14.688743] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:59.594 [2024-12-14 03:00:14.688780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:59.594 [2024-12-14 03:00:14.689751] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:59.594 [2024-12-14 03:00:14.689760] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:59.594 [2024-12-14 03:00:14.689764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:59.594 [2024-12-14 03:00:14.689781] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:59.594 [2024-12-14 03:00:14.689791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:59.594 [2024-12-14 03:00:14.689800] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:59.594 [2024-12-14 03:00:14.689805] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:59.594 [2024-12-14 03:00:14.689808] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:59.594 [2024-12-14 03:00:14.689819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:59.594 [2024-12-14 03:00:14.697319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:59.594 [2024-12-14 03:00:14.697329] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:59.594 [2024-12-14 03:00:14.697333] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:59.594 [2024-12-14 03:00:14.697337] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:59.594 [2024-12-14 03:00:14.697341] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:59.594 [2024-12-14 03:00:14.697348] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:59.594 [2024-12-14 03:00:14.697352] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:59.594 [2024-12-14 03:00:14.697356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:59.594 [2024-12-14 03:00:14.697365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:59.594 [2024-12-14 
03:00:14.697375] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:59.594 [2024-12-14 03:00:14.705317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:59.594 [2024-12-14 03:00:14.705329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:59.594 [2024-12-14 03:00:14.705337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:59.594 [2024-12-14 03:00:14.705344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:59.594 [2024-12-14 03:00:14.705351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:59.594 [2024-12-14 03:00:14.705356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:59.594 [2024-12-14 03:00:14.705364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:59.594 [2024-12-14 03:00:14.705372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:59.594 [2024-12-14 03:00:14.713317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:59.594 [2024-12-14 03:00:14.713324] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:59.595 [2024-12-14 03:00:14.713329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:59.595 [2024-12-14 03:00:14.713335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:59.595 [2024-12-14 03:00:14.713340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:59.595 [2024-12-14 03:00:14.713348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:59.868 [2024-12-14 03:00:14.721317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:59.868 [2024-12-14 03:00:14.721368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:59.868 [2024-12-14 03:00:14.721378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:59.868 [2024-12-14 03:00:14.721385] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:59.868 [2024-12-14 03:00:14.721389] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:18:59.868 [2024-12-14 03:00:14.721403] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:59.868 [2024-12-14 03:00:14.721409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:59.868 [2024-12-14 03:00:14.729317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:59.868 [2024-12-14 03:00:14.729328] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:59.868 [2024-12-14 03:00:14.729336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:59.868 [2024-12-14 03:00:14.729343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:59.868 [2024-12-14 03:00:14.729349] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:59.868 [2024-12-14 03:00:14.729353] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:59.868 [2024-12-14 03:00:14.729355] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:59.868 [2024-12-14 03:00:14.729361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:59.868 [2024-12-14 03:00:14.737317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:59.868 [2024-12-14 03:00:14.737330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:59.868 [2024-12-14 03:00:14.737337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:59.868 [2024-12-14 03:00:14.737343] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:59.868 [2024-12-14 03:00:14.737347] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:59.868 [2024-12-14 03:00:14.737350] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:59.868 [2024-12-14 03:00:14.737356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:59.868 [2024-12-14 03:00:14.745317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:59.868 [2024-12-14 03:00:14.745326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:59.868 [2024-12-14 03:00:14.745332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:59.868 [2024-12-14 03:00:14.745339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:18:59.868 [2024-12-14 03:00:14.745344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:59.868 [2024-12-14 03:00:14.745348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:59.868 [2024-12-14 03:00:14.745353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:59.868 [2024-12-14 03:00:14.745357] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:59.868 [2024-12-14 03:00:14.745361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:59.868 [2024-12-14 03:00:14.745367] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:59.868 [2024-12-14 03:00:14.745382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:59.868 [2024-12-14 03:00:14.753318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:59.868 [2024-12-14 03:00:14.753331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:59.868 [2024-12-14 03:00:14.761318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:59.868 [2024-12-14 03:00:14.761330] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:59.868 [2024-12-14 03:00:14.769317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:59.868 [2024-12-14 03:00:14.769328] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:59.868 [2024-12-14 03:00:14.777317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:59.868 [2024-12-14 03:00:14.777331] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:59.868 [2024-12-14 03:00:14.777336] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:59.868 [2024-12-14 03:00:14.777339] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:59.868 [2024-12-14 03:00:14.777342] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:59.868 [2024-12-14 03:00:14.777345] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:59.868 [2024-12-14 03:00:14.777350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:59.868 [2024-12-14 03:00:14.777357] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:59.868 
[2024-12-14 03:00:14.777360] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:59.868 [2024-12-14 03:00:14.777363] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:59.868 [2024-12-14 03:00:14.777369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:59.868 [2024-12-14 03:00:14.777375] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:59.868 [2024-12-14 03:00:14.777378] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:59.868 [2024-12-14 03:00:14.777381] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:59.869 [2024-12-14 03:00:14.777387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:59.869 [2024-12-14 03:00:14.777393] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:59.869 [2024-12-14 03:00:14.777397] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:59.869 [2024-12-14 03:00:14.777400] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:59.869 [2024-12-14 03:00:14.777405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:59.869 [2024-12-14 03:00:14.785316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:59.869 [2024-12-14 03:00:14.785331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:59.869 [2024-12-14 03:00:14.785340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:59.869 [2024-12-14 03:00:14.785346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:59.869 ===================================================== 00:18:59.869 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:59.869 ===================================================== 00:18:59.869 Controller Capabilities/Features 00:18:59.869 ================================ 00:18:59.869 Vendor ID: 4e58 00:18:59.869 Subsystem Vendor ID: 4e58 00:18:59.869 Serial Number: SPDK2 00:18:59.869 Model Number: SPDK bdev Controller 00:18:59.869 Firmware Version: 25.01 00:18:59.869 Recommended Arb Burst: 6 00:18:59.869 IEEE OUI Identifier: 8d 6b 50 00:18:59.869 Multi-path I/O 00:18:59.869 May have multiple subsystem ports: Yes 00:18:59.869 May have multiple controllers: Yes 00:18:59.869 Associated with SR-IOV VF: No 00:18:59.869 Max Data Transfer Size: 131072 00:18:59.869 Max Number of Namespaces: 32 00:18:59.869 Max Number of I/O Queues: 127 00:18:59.869 NVMe Specification Version (VS): 1.3 00:18:59.869 NVMe Specification Version (Identify): 1.3 00:18:59.869 Maximum Queue Entries: 256 00:18:59.869 Contiguous Queues Required: Yes 00:18:59.869 Arbitration Mechanisms Supported 00:18:59.869 Weighted Round Robin: Not Supported 00:18:59.869 Vendor Specific: Not 
Supported 00:18:59.869 Reset Timeout: 15000 ms 00:18:59.869 Doorbell Stride: 4 bytes 00:18:59.869 NVM Subsystem Reset: Not Supported 00:18:59.869 Command Sets Supported 00:18:59.869 NVM Command Set: Supported 00:18:59.869 Boot Partition: Not Supported 00:18:59.869 Memory Page Size Minimum: 4096 bytes 00:18:59.869 Memory Page Size Maximum: 4096 bytes 00:18:59.869 Persistent Memory Region: Not Supported 00:18:59.869 Optional Asynchronous Events Supported 00:18:59.869 Namespace Attribute Notices: Supported 00:18:59.869 Firmware Activation Notices: Not Supported 00:18:59.869 ANA Change Notices: Not Supported 00:18:59.869 PLE Aggregate Log Change Notices: Not Supported 00:18:59.869 LBA Status Info Alert Notices: Not Supported 00:18:59.869 EGE Aggregate Log Change Notices: Not Supported 00:18:59.869 Normal NVM Subsystem Shutdown event: Not Supported 00:18:59.869 Zone Descriptor Change Notices: Not Supported 00:18:59.869 Discovery Log Change Notices: Not Supported 00:18:59.869 Controller Attributes 00:18:59.869 128-bit Host Identifier: Supported 00:18:59.869 Non-Operational Permissive Mode: Not Supported 00:18:59.869 NVM Sets: Not Supported 00:18:59.869 Read Recovery Levels: Not Supported 00:18:59.869 Endurance Groups: Not Supported 00:18:59.869 Predictable Latency Mode: Not Supported 00:18:59.869 Traffic Based Keep ALive: Not Supported 00:18:59.869 Namespace Granularity: Not Supported 00:18:59.869 SQ Associations: Not Supported 00:18:59.869 UUID List: Not Supported 00:18:59.869 Multi-Domain Subsystem: Not Supported 00:18:59.869 Fixed Capacity Management: Not Supported 00:18:59.869 Variable Capacity Management: Not Supported 00:18:59.869 Delete Endurance Group: Not Supported 00:18:59.869 Delete NVM Set: Not Supported 00:18:59.869 Extended LBA Formats Supported: Not Supported 00:18:59.869 Flexible Data Placement Supported: Not Supported 00:18:59.869 00:18:59.869 Controller Memory Buffer Support 00:18:59.869 ================================ 00:18:59.869 Supported: No 00:18:59.869 00:18:59.869 Persistent Memory Region Support 00:18:59.869 ================================ 00:18:59.869 Supported: No 00:18:59.869 00:18:59.869 Admin Command Set Attributes 00:18:59.869 ============================ 00:18:59.869 Security Send/Receive: Not Supported 00:18:59.869 Format NVM: Not Supported 00:18:59.869 Firmware Activate/Download: Not Supported 00:18:59.869 Namespace Management: Not Supported 00:18:59.869 Device Self-Test: Not Supported 00:18:59.869 Directives: Not Supported 00:18:59.869 NVMe-MI: Not Supported 00:18:59.869 Virtualization Management: Not Supported 00:18:59.869 Doorbell Buffer Config: Not Supported 00:18:59.869 Get LBA Status Capability: Not Supported 00:18:59.869 Command & Feature Lockdown Capability: Not Supported 00:18:59.869 Abort Command Limit: 4 00:18:59.869 Async Event Request Limit: 4 00:18:59.869 Number of Firmware Slots: N/A 00:18:59.869 Firmware Slot 1 Read-Only: N/A 00:18:59.869 Firmware Activation Without Reset: N/A 00:18:59.869 Multiple Update Detection Support: N/A 00:18:59.869 Firmware Update Granularity: No Information Provided 00:18:59.869 Per-Namespace SMART Log: No 00:18:59.869 Asymmetric Namespace Access Log Page: Not Supported 00:18:59.869 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:59.869 Command Effects Log Page: Supported 00:18:59.869 Get Log Page Extended Data: Supported 00:18:59.869 Telemetry Log Pages: Not Supported 00:18:59.869 Persistent Event Log Pages: Not Supported 00:18:59.869 Supported Log Pages Log Page: May Support 00:18:59.869 Commands Supported & 
Effects Log Page: Not Supported 00:18:59.869 Feature Identifiers & Effects Log Page:May Support 00:18:59.869 NVMe-MI Commands & Effects Log Page: May Support 00:18:59.869 Data Area 4 for Telemetry Log: Not Supported 00:18:59.869 Error Log Page Entries Supported: 128 00:18:59.869 Keep Alive: Supported 00:18:59.869 Keep Alive Granularity: 10000 ms 00:18:59.869 00:18:59.869 NVM Command Set Attributes 00:18:59.869 ========================== 00:18:59.869 Submission Queue Entry Size 00:18:59.869 Max: 64 00:18:59.869 Min: 64 00:18:59.869 Completion Queue Entry Size 00:18:59.869 Max: 16 00:18:59.869 Min: 16 00:18:59.869 Number of Namespaces: 32 00:18:59.869 Compare Command: Supported 00:18:59.869 Write Uncorrectable Command: Not Supported 00:18:59.869 Dataset Management Command: Supported 00:18:59.869 Write Zeroes Command: Supported 00:18:59.869 Set Features Save Field: Not Supported 00:18:59.869 Reservations: Not Supported 00:18:59.869 Timestamp: Not Supported 00:18:59.869 Copy: Supported 00:18:59.869 Volatile Write Cache: Present 00:18:59.869 Atomic Write Unit (Normal): 1 00:18:59.869 Atomic Write Unit (PFail): 1 00:18:59.869 Atomic Compare & Write Unit: 1 00:18:59.869 Fused Compare & Write: Supported 00:18:59.869 Scatter-Gather List 00:18:59.869 SGL Command Set: Supported (Dword aligned) 00:18:59.869 SGL Keyed: Not Supported 00:18:59.869 SGL Bit Bucket Descriptor: Not Supported 00:18:59.869 SGL Metadata Pointer: Not Supported 00:18:59.869 Oversized SGL: Not Supported 00:18:59.869 SGL Metadata Address: Not Supported 00:18:59.869 SGL Offset: Not Supported 00:18:59.869 Transport SGL Data Block: Not Supported 00:18:59.869 Replay Protected Memory Block: Not Supported 00:18:59.869 00:18:59.869 Firmware Slot Information 00:18:59.869 ========================= 00:18:59.869 Active slot: 1 00:18:59.869 Slot 1 Firmware Revision: 25.01 00:18:59.869 00:18:59.869 00:18:59.869 Commands Supported and Effects 00:18:59.869 ============================== 00:18:59.869 Admin Commands 00:18:59.869 -------------- 00:18:59.869 Get Log Page (02h): Supported 00:18:59.869 Identify (06h): Supported 00:18:59.869 Abort (08h): Supported 00:18:59.869 Set Features (09h): Supported 00:18:59.869 Get Features (0Ah): Supported 00:18:59.869 Asynchronous Event Request (0Ch): Supported 00:18:59.869 Keep Alive (18h): Supported 00:18:59.869 I/O Commands 00:18:59.869 ------------ 00:18:59.869 Flush (00h): Supported LBA-Change 00:18:59.869 Write (01h): Supported LBA-Change 00:18:59.869 Read (02h): Supported 00:18:59.869 Compare (05h): Supported 00:18:59.869 Write Zeroes (08h): Supported LBA-Change 00:18:59.869 Dataset Management (09h): Supported LBA-Change 00:18:59.869 Copy (19h): Supported LBA-Change 00:18:59.869 00:18:59.869 Error Log 00:18:59.869 ========= 00:18:59.869 00:18:59.869 Arbitration 00:18:59.869 =========== 00:18:59.869 Arbitration Burst: 1 00:18:59.869 00:18:59.869 Power Management 00:18:59.869 ================ 00:18:59.869 Number of Power States: 1 00:18:59.869 Current Power State: Power State #0 00:18:59.869 Power State #0: 00:18:59.869 Max Power: 0.00 W 00:18:59.869 Non-Operational State: Operational 00:18:59.869 Entry Latency: Not Reported 00:18:59.869 Exit Latency: Not Reported 00:18:59.869 Relative Read Throughput: 0 00:18:59.869 Relative Read Latency: 0 00:18:59.869 Relative Write Throughput: 0 00:18:59.869 Relative Write Latency: 0 00:18:59.869 Idle Power: Not Reported 00:18:59.870 Active Power: Not Reported 00:18:59.870 Non-Operational Permissive Mode: Not Supported 00:18:59.870 00:18:59.870 Health Information 
00:18:59.870 ================== 00:18:59.870 Critical Warnings: 00:18:59.870 Available Spare Space: OK 00:18:59.870 Temperature: OK 00:18:59.870 Device Reliability: OK 00:18:59.870 Read Only: No 00:18:59.870 Volatile Memory Backup: OK 00:18:59.870 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:59.870 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:59.870 Available Spare: 0% 00:18:59.870 Available Sp[2024-12-14 03:00:14.785428] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:59.870 [2024-12-14 03:00:14.793316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:59.870 [2024-12-14 03:00:14.793344] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:59.870 [2024-12-14 03:00:14.793352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.870 [2024-12-14 03:00:14.793358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.870 [2024-12-14 03:00:14.793363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.870 [2024-12-14 03:00:14.793368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.870 [2024-12-14 03:00:14.793408] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:59.870 [2024-12-14 03:00:14.793418] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:59.870 [2024-12-14 03:00:14.794411] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:59.870 [2024-12-14 03:00:14.794453] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:59.870 [2024-12-14 03:00:14.794459] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:59.870 [2024-12-14 03:00:14.795417] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:59.870 [2024-12-14 03:00:14.795428] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:59.870 [2024-12-14 03:00:14.795483] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:59.870 [2024-12-14 03:00:14.796439] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:59.870 are Threshold: 0% 00:18:59.870 Life Percentage Used: 0% 00:18:59.870 Data Units Read: 0 00:18:59.870 Data Units Written: 0 00:18:59.870 Host Read Commands: 0 00:18:59.870 Host Write Commands: 0 00:18:59.870 Controller Busy Time: 0 minutes 00:18:59.870 Power Cycles: 0 00:18:59.870 Power On Hours: 0 hours 00:18:59.870 Unsafe Shutdowns: 0 00:18:59.870 Unrecoverable Media Errors: 0 00:18:59.870 Lifetime Error Log Entries: 0 00:18:59.870 Warning Temperature 
Time: 0 minutes 00:18:59.870 Critical Temperature Time: 0 minutes 00:18:59.870 00:18:59.870 Number of Queues 00:18:59.870 ================ 00:18:59.870 Number of I/O Submission Queues: 127 00:18:59.870 Number of I/O Completion Queues: 127 00:18:59.870 00:18:59.870 Active Namespaces 00:18:59.870 ================= 00:18:59.870 Namespace ID:1 00:18:59.870 Error Recovery Timeout: Unlimited 00:18:59.870 Command Set Identifier: NVM (00h) 00:18:59.870 Deallocate: Supported 00:18:59.870 Deallocated/Unwritten Error: Not Supported 00:18:59.870 Deallocated Read Value: Unknown 00:18:59.870 Deallocate in Write Zeroes: Not Supported 00:18:59.870 Deallocated Guard Field: 0xFFFF 00:18:59.870 Flush: Supported 00:18:59.870 Reservation: Supported 00:18:59.870 Namespace Sharing Capabilities: Multiple Controllers 00:18:59.870 Size (in LBAs): 131072 (0GiB) 00:18:59.870 Capacity (in LBAs): 131072 (0GiB) 00:18:59.870 Utilization (in LBAs): 131072 (0GiB) 00:18:59.870 NGUID: F3C4D9912C194E59A1BEFEBDD91B45A9 00:18:59.870 UUID: f3c4d991-2c19-4e59-a1be-febdd91b45a9 00:18:59.870 Thin Provisioning: Not Supported 00:18:59.870 Per-NS Atomic Units: Yes 00:18:59.870 Atomic Boundary Size (Normal): 0 00:18:59.870 Atomic Boundary Size (PFail): 0 00:18:59.870 Atomic Boundary Offset: 0 00:18:59.870 Maximum Single Source Range Length: 65535 00:18:59.870 Maximum Copy Length: 65535 00:18:59.870 Maximum Source Range Count: 1 00:18:59.870 NGUID/EUI64 Never Reused: No 00:18:59.870 Namespace Write Protected: No 00:18:59.870 Number of LBA Formats: 1 00:18:59.870 Current LBA Format: LBA Format #00 00:18:59.870 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:59.870 00:18:59.870 03:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:00.135 [2024-12-14 03:00:15.027523] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:05.439 Initializing NVMe Controllers 00:19:05.439 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:05.439 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:05.439 Initialization complete. Launching workers. 
00:19:05.439 ======================================================== 00:19:05.439 Latency(us) 00:19:05.439 Device Information : IOPS MiB/s Average min max 00:19:05.439 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39935.74 156.00 3205.00 967.09 8593.77 00:19:05.439 ======================================================== 00:19:05.439 Total : 39935.74 156.00 3205.00 967.09 8593.77 00:19:05.439 00:19:05.439 [2024-12-14 03:00:20.133578] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:05.439 03:00:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:05.439 [2024-12-14 03:00:20.372276] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:10.871 Initializing NVMe Controllers 00:19:10.871 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:10.871 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:10.871 Initialization complete. Launching workers. 00:19:10.871 ======================================================== 00:19:10.871 Latency(us) 00:19:10.871 Device Information : IOPS MiB/s Average min max 00:19:10.871 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39940.15 156.02 3204.41 1033.58 7552.48 00:19:10.871 ======================================================== 00:19:10.871 Total : 39940.15 156.02 3204.41 1033.58 7552.48 00:19:10.871 00:19:10.871 [2024-12-14 03:00:25.389366] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:10.871 03:00:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:10.871 [2024-12-14 03:00:25.599655] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:16.202 [2024-12-14 03:00:30.741411] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:16.202 Initializing NVMe Controllers 00:19:16.202 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:16.202 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:16.202 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:16.202 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:16.202 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:16.202 Initialization complete. Launching workers. 
00:19:16.202 Starting thread on core 2 00:19:16.202 Starting thread on core 3 00:19:16.202 Starting thread on core 1 00:19:16.202 03:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:16.203 [2024-12-14 03:00:31.035035] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:19.623 [2024-12-14 03:00:34.099072] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:19.623 Initializing NVMe Controllers 00:19:19.623 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:19.623 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:19.623 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:19.623 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:19.623 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:19.623 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:19.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:19.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:19.623 Initialization complete. Launching workers. 00:19:19.623 Starting thread on core 1 with urgent priority queue 00:19:19.623 Starting thread on core 2 with urgent priority queue 00:19:19.623 Starting thread on core 3 with urgent priority queue 00:19:19.623 Starting thread on core 0 with urgent priority queue 00:19:19.623 SPDK bdev Controller (SPDK2 ) core 0: 8749.00 IO/s 11.43 secs/100000 ios 00:19:19.623 SPDK bdev Controller (SPDK2 ) core 1: 7860.00 IO/s 12.72 secs/100000 ios 00:19:19.623 SPDK bdev Controller (SPDK2 ) core 2: 7866.67 IO/s 12.71 secs/100000 ios 00:19:19.623 SPDK bdev Controller (SPDK2 ) core 3: 8706.67 IO/s 11.49 secs/100000 ios 00:19:19.623 ======================================================== 00:19:19.623 00:19:19.623 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:19.623 [2024-12-14 03:00:34.385753] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:19.624 Initializing NVMe Controllers 00:19:19.624 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:19.624 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:19.624 Namespace ID: 1 size: 0GB 00:19:19.624 Initialization complete. 00:19:19.624 INFO: using host memory buffer for IO 00:19:19.624 Hello world! 
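All of the example binaries exercised in this pass (identify, perf, reconnect, arbitration, hello_world) take the same VFIOUSER transport ID string; a minimal sketch of re-running the hello_world step by hand, with the workspace paths used above:
    examples=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
    traddr=/var/run/vfio-user/domain/vfio-user2/2
    subnqn=nqn.2019-07.io.spdk:cnode2
    # -r selects the transport: a user-space vfio-user endpoint instead of a
    # PCIe BDF; the other example tools in this run pass the same string
    $examples/hello_world -d 256 -g -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn"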
00:19:19.624 [2024-12-14 03:00:34.393799] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:19.624 03:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:19.624 [2024-12-14 03:00:34.675694] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:21.040 Initializing NVMe Controllers 00:19:21.040 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:21.040 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:21.040 Initialization complete. Launching workers. 00:19:21.040 submit (in ns) avg, min, max = 7288.7, 3140.0, 4001052.4 00:19:21.040 complete (in ns) avg, min, max = 17948.1, 1758.1, 4033793.3 00:19:21.040 00:19:21.040 Submit histogram 00:19:21.040 ================ 00:19:21.040 Range in us Cumulative Count 00:19:21.040 3.139 - 3.154: 0.0184% ( 3) 00:19:21.040 3.170 - 3.185: 0.0306% ( 2) 00:19:21.040 3.185 - 3.200: 0.3611% ( 54) 00:19:21.040 3.200 - 3.215: 2.1175% ( 287) 00:19:21.040 3.215 - 3.230: 6.3219% ( 687) 00:19:21.040 3.230 - 3.246: 12.3562% ( 986) 00:19:21.040 3.246 - 3.261: 18.8066% ( 1054) 00:19:21.040 3.261 - 3.276: 26.2056% ( 1209) 00:19:21.040 3.276 - 3.291: 33.2436% ( 1150) 00:19:21.040 3.291 - 3.307: 39.5288% ( 1027) 00:19:21.040 3.307 - 3.322: 44.2044% ( 764) 00:19:21.040 3.322 - 3.337: 48.2742% ( 665) 00:19:21.040 3.337 - 3.352: 52.0441% ( 616) 00:19:21.040 3.352 - 3.368: 56.5300% ( 733) 00:19:21.040 3.368 - 3.383: 63.3660% ( 1117) 00:19:21.040 3.383 - 3.398: 69.3207% ( 973) 00:19:21.040 3.398 - 3.413: 75.2387% ( 967) 00:19:21.040 3.413 - 3.429: 80.4529% ( 852) 00:19:21.040 3.429 - 3.444: 83.7760% ( 543) 00:19:21.040 3.444 - 3.459: 85.8262% ( 335) 00:19:21.040 3.459 - 3.474: 86.7013% ( 143) 00:19:21.040 3.474 - 3.490: 87.2827% ( 95) 00:19:21.040 3.490 - 3.505: 87.7479% ( 76) 00:19:21.040 3.505 - 3.520: 88.4027% ( 107) 00:19:21.040 3.520 - 3.535: 89.2717% ( 142) 00:19:21.040 3.535 - 3.550: 90.1102% ( 137) 00:19:21.040 3.550 - 3.566: 91.0404% ( 152) 00:19:21.040 3.566 - 3.581: 92.0318% ( 162) 00:19:21.040 3.581 - 3.596: 92.9315% ( 147) 00:19:21.040 3.596 - 3.611: 93.6965% ( 125) 00:19:21.040 3.611 - 3.627: 94.5716% ( 143) 00:19:21.040 3.627 - 3.642: 95.5936% ( 167) 00:19:21.040 3.642 - 3.657: 96.4749% ( 144) 00:19:21.040 3.657 - 3.672: 97.2399% ( 125) 00:19:21.040 3.672 - 3.688: 97.8519% ( 100) 00:19:21.040 3.688 - 3.703: 98.2925% ( 72) 00:19:21.040 3.703 - 3.718: 98.6781% ( 63) 00:19:21.040 3.718 - 3.733: 99.0208% ( 56) 00:19:21.040 3.733 - 3.749: 99.2289% ( 34) 00:19:21.040 3.749 - 3.764: 99.3758% ( 24) 00:19:21.040 3.764 - 3.779: 99.4859% ( 18) 00:19:21.040 3.779 - 3.794: 99.5349% ( 8) 00:19:21.040 3.794 - 3.810: 99.5532% ( 3) 00:19:21.040 3.810 - 3.825: 99.5655% ( 2) 00:19:21.040 3.825 - 3.840: 99.5777% ( 2) 00:19:21.040 3.870 - 3.886: 99.5900% ( 2) 00:19:21.040 3.901 - 3.931: 99.5961% ( 1) 00:19:21.040 3.931 - 3.962: 99.6083% ( 2) 00:19:21.040 3.962 - 3.992: 99.6144% ( 1) 00:19:21.040 4.328 - 4.358: 99.6206% ( 1) 00:19:21.040 5.303 - 5.333: 99.6267% ( 1) 00:19:21.040 5.425 - 5.455: 99.6328% ( 1) 00:19:21.040 5.455 - 5.486: 99.6389% ( 1) 00:19:21.040 5.547 - 5.577: 99.6450% ( 1) 00:19:21.040 5.699 - 5.730: 99.6573% ( 2) 00:19:21.040 5.882 - 5.912: 99.6695% ( 2) 00:19:21.040 6.278 - 6.309: 99.6756% ( 1) 
00:19:21.040 6.400 - 6.430: 99.6818% ( 1) 00:19:21.040 6.552 - 6.583: 99.6879% ( 1) 00:19:21.040 6.644 - 6.674: 99.6940% ( 1) 00:19:21.040 6.735 - 6.766: 99.7062% ( 2) 00:19:21.040 6.827 - 6.857: 99.7185% ( 2) 00:19:21.040 6.857 - 6.888: 99.7246% ( 1) 00:19:21.040 7.040 - 7.070: 99.7307% ( 1) 00:19:21.040 7.070 - 7.101: 99.7368% ( 1) 00:19:21.040 7.253 - 7.284: 99.7552% ( 3) 00:19:21.040 7.345 - 7.375: 99.7613% ( 1) 00:19:21.040 7.375 - 7.406: 99.7736% ( 2) 00:19:21.040 7.528 - 7.558: 99.7797% ( 1) 00:19:21.040 7.650 - 7.680: 99.7858% ( 1) 00:19:21.040 7.680 - 7.710: 99.7919% ( 1) 00:19:21.040 7.741 - 7.771: 99.7980% ( 1) 00:19:21.040 8.107 - 8.168: 99.8103% ( 2) 00:19:21.040 8.168 - 8.229: 99.8164% ( 1) 00:19:21.040 8.229 - 8.290: 99.8225% ( 1) 00:19:21.040 8.290 - 8.350: 99.8348% ( 2) 00:19:21.040 8.777 - 8.838: 99.8470% ( 2) 00:19:21.040 8.838 - 8.899: 99.8531% ( 1) 00:19:21.040 8.899 - 8.960: 99.8654% ( 2) 00:19:21.040 [2024-12-14 03:00:35.769301] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:21.040 9.265 - 9.326: 99.8715% ( 1) 00:19:21.040 9.935 - 9.996: 99.8776% ( 1) 00:19:21.040 15.177 - 15.238: 99.8837% ( 1) 00:19:21.040 19.261 - 19.383: 99.8960% ( 2) 00:19:21.040 19.870 - 19.992: 99.9021% ( 1) 00:19:21.040 3994.575 - 4025.783: 100.0000% ( 16) 00:19:21.040 00:19:21.040 Complete histogram 00:19:21.040 ================== 00:19:21.040 Range in us Cumulative Count 00:19:21.040 1.752 - 1.760: 0.0122% ( 2) 00:19:21.040 1.760 - 1.768: 0.5814% ( 93) 00:19:21.040 1.768 - 1.775: 6.4504% ( 959) 00:19:21.040 1.775 - 1.783: 23.3231% ( 2757) 00:19:21.040 1.783 - 1.790: 39.6512% ( 2668) 00:19:21.040 1.790 - 1.798: 47.0930% ( 1216) 00:19:21.040 1.798 - 1.806: 49.8286% ( 447) 00:19:21.040 1.806 - 1.813: 52.1542% ( 380) 00:19:21.040 1.813 - 1.821: 57.8580% ( 932) 00:19:21.040 1.821 - 1.829: 71.4749% ( 2225) 00:19:21.040 1.829 - 1.836: 84.5471% ( 2136) 00:19:21.040 1.836 - 1.844: 90.8996% ( 1038) 00:19:21.040 1.844 - 1.851: 93.7576% ( 467) 00:19:21.040 1.851 - 1.859: 95.3244% ( 256) 00:19:21.041 1.859 - 1.867: 96.5239% ( 196) 00:19:21.041 1.867 - 1.874: 97.1175% ( 97) 00:19:21.041 1.874 - 1.882: 97.4051% ( 47) 00:19:21.041 1.882 - 1.890: 97.6989% ( 48) 00:19:21.041 1.890 - 1.897: 98.0661% ( 60) 00:19:21.041 1.897 - 1.905: 98.4761% ( 67) 00:19:21.041 1.905 - 1.912: 98.7638% ( 47) 00:19:21.041 1.912 - 1.920: 98.9229% ( 26) 00:19:21.041 1.920 - 1.928: 99.0698% ( 24) 00:19:21.041 1.928 - 1.935: 99.1065% ( 6) 00:19:21.041 1.935 - 1.943: 99.1738% ( 11) 00:19:21.041 1.943 - 1.950: 99.2595% ( 14) 00:19:21.041 1.950 - 1.966: 99.3513% ( 15) 00:19:21.041 1.966 - 1.981: 99.3758% ( 4) 00:19:21.041 1.981 - 1.996: 99.3880% ( 2) 00:19:21.041 1.996 - 2.011: 99.4002% ( 2) 00:19:21.041 2.011 - 2.027: 99.4064% ( 1) 00:19:21.041 2.103 - 2.118: 99.4125% ( 1) 00:19:21.041 2.286 - 2.301: 99.4186% ( 1) 00:19:21.041 2.301 - 2.316: 99.4247% ( 1) 00:19:21.041 2.362 - 2.377: 99.4308% ( 1) 00:19:21.041 3.642 - 3.657: 99.4370% ( 1) 00:19:21.041 3.962 - 3.992: 99.4431% ( 1) 00:19:21.041 4.328 - 4.358: 99.4492% ( 1) 00:19:21.041 4.480 - 4.510: 99.4553% ( 1) 00:19:21.041 4.876 - 4.907: 99.4614% ( 1) 00:19:21.041 4.937 - 4.968: 99.4676% ( 1) 00:19:21.041 4.968 - 4.998: 99.4798% ( 2) 00:19:21.041 5.029 - 5.059: 99.4859% ( 1) 00:19:21.041 5.211 - 5.242: 99.4920% ( 1) 00:19:21.041 5.242 - 5.272: 99.4982% ( 1) 00:19:21.041 5.303 - 5.333: 99.5043% ( 1) 00:19:21.041 5.577 - 5.608: 99.5104% ( 1) 00:19:21.041 6.004 - 6.034: 99.5165% ( 1) 00:19:21.041 6.065 - 6.095: 99.5226% ( 1) 
00:19:21.041 6.187 - 6.217: 99.5288% ( 1) 00:19:21.041 6.613 - 6.644: 99.5349% ( 1) 00:19:21.041 6.766 - 6.796: 99.5410% ( 1) 00:19:21.041 6.888 - 6.918: 99.5471% ( 1) 00:19:21.041 6.918 - 6.949: 99.5532% ( 1) 00:19:21.041 6.949 - 6.979: 99.5655% ( 2) 00:19:21.041 7.010 - 7.040: 99.5716% ( 1) 00:19:21.041 7.345 - 7.375: 99.5777% ( 1) 00:19:21.041 7.650 - 7.680: 99.5838% ( 1) 00:19:21.041 8.290 - 8.350: 99.5900% ( 1) 00:19:21.041 15.970 - 16.091: 99.5961% ( 1) 00:19:21.041 3854.141 - 3869.745: 99.6022% ( 1) 00:19:21.041 3994.575 - 4025.783: 99.9939% ( 64) 00:19:21.041 4025.783 - 4056.990: 100.0000% ( 1) 00:19:21.041 00:19:21.041 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:21.041 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:21.041 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:21.041 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:21.041 03:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:21.041 [ 00:19:21.041 { 00:19:21.041 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:21.041 "subtype": "Discovery", 00:19:21.041 "listen_addresses": [], 00:19:21.041 "allow_any_host": true, 00:19:21.041 "hosts": [] 00:19:21.041 }, 00:19:21.041 { 00:19:21.041 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:21.041 "subtype": "NVMe", 00:19:21.041 "listen_addresses": [ 00:19:21.041 { 00:19:21.041 "trtype": "VFIOUSER", 00:19:21.041 "adrfam": "IPv4", 00:19:21.041 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:21.041 "trsvcid": "0" 00:19:21.041 } 00:19:21.041 ], 00:19:21.041 "allow_any_host": true, 00:19:21.041 "hosts": [], 00:19:21.041 "serial_number": "SPDK1", 00:19:21.041 "model_number": "SPDK bdev Controller", 00:19:21.041 "max_namespaces": 32, 00:19:21.041 "min_cntlid": 1, 00:19:21.041 "max_cntlid": 65519, 00:19:21.041 "namespaces": [ 00:19:21.041 { 00:19:21.041 "nsid": 1, 00:19:21.041 "bdev_name": "Malloc1", 00:19:21.041 "name": "Malloc1", 00:19:21.041 "nguid": "0649AF08D9E44862A305F138A06CAA7D", 00:19:21.041 "uuid": "0649af08-d9e4-4862-a305-f138a06caa7d" 00:19:21.041 }, 00:19:21.041 { 00:19:21.041 "nsid": 2, 00:19:21.041 "bdev_name": "Malloc3", 00:19:21.041 "name": "Malloc3", 00:19:21.041 "nguid": "E5F9152E2AEE4781BCD71C4885082E67", 00:19:21.041 "uuid": "e5f9152e-2aee-4781-bcd7-1c4885082e67" 00:19:21.041 } 00:19:21.041 ] 00:19:21.041 }, 00:19:21.041 { 00:19:21.041 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:21.041 "subtype": "NVMe", 00:19:21.041 "listen_addresses": [ 00:19:21.041 { 00:19:21.041 "trtype": "VFIOUSER", 00:19:21.041 "adrfam": "IPv4", 00:19:21.041 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:21.041 "trsvcid": "0" 00:19:21.041 } 00:19:21.041 ], 00:19:21.041 "allow_any_host": true, 00:19:21.041 "hosts": [], 00:19:21.041 "serial_number": "SPDK2", 00:19:21.041 "model_number": "SPDK bdev Controller", 00:19:21.041 "max_namespaces": 32, 00:19:21.041 "min_cntlid": 1, 00:19:21.041 "max_cntlid": 65519, 00:19:21.041 "namespaces": [ 00:19:21.041 { 00:19:21.041 "nsid": 1, 00:19:21.041 "bdev_name": "Malloc2", 00:19:21.041 "name": "Malloc2", 00:19:21.041 "nguid": 
"F3C4D9912C194E59A1BEFEBDD91B45A9", 00:19:21.041 "uuid": "f3c4d991-2c19-4e59-a1be-febdd91b45a9" 00:19:21.041 } 00:19:21.041 ] 00:19:21.041 } 00:19:21.041 ] 00:19:21.041 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:21.041 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:21.041 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=304776 00:19:21.041 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:21.041 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:21.041 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:21.041 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:19:21.041 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:19:21.041 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:21.041 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:21.041 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:19:21.041 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:19:21.041 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:21.327 [2024-12-14 03:00:36.171768] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:21.327 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:21.327 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:21.327 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:21.327 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:21.327 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:21.327 Malloc4 00:19:21.327 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:21.618 [2024-12-14 03:00:36.622169] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:21.618 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:21.618 Asynchronous Event Request test 00:19:21.618 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:21.618 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:21.618 Registering asynchronous event callbacks... 00:19:21.618 Starting namespace attribute notice tests for all controllers... 00:19:21.618 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:21.618 aer_cb - Changed Namespace 00:19:21.618 Cleaning up... 00:19:21.896 [ 00:19:21.896 { 00:19:21.896 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:21.896 "subtype": "Discovery", 00:19:21.896 "listen_addresses": [], 00:19:21.896 "allow_any_host": true, 00:19:21.896 "hosts": [] 00:19:21.896 }, 00:19:21.896 { 00:19:21.896 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:21.896 "subtype": "NVMe", 00:19:21.896 "listen_addresses": [ 00:19:21.896 { 00:19:21.896 "trtype": "VFIOUSER", 00:19:21.896 "adrfam": "IPv4", 00:19:21.896 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:21.896 "trsvcid": "0" 00:19:21.896 } 00:19:21.896 ], 00:19:21.896 "allow_any_host": true, 00:19:21.896 "hosts": [], 00:19:21.896 "serial_number": "SPDK1", 00:19:21.896 "model_number": "SPDK bdev Controller", 00:19:21.896 "max_namespaces": 32, 00:19:21.896 "min_cntlid": 1, 00:19:21.896 "max_cntlid": 65519, 00:19:21.896 "namespaces": [ 00:19:21.896 { 00:19:21.896 "nsid": 1, 00:19:21.896 "bdev_name": "Malloc1", 00:19:21.896 "name": "Malloc1", 00:19:21.896 "nguid": "0649AF08D9E44862A305F138A06CAA7D", 00:19:21.896 "uuid": "0649af08-d9e4-4862-a305-f138a06caa7d" 00:19:21.896 }, 00:19:21.896 { 00:19:21.896 "nsid": 2, 00:19:21.896 "bdev_name": "Malloc3", 00:19:21.896 "name": "Malloc3", 00:19:21.896 "nguid": "E5F9152E2AEE4781BCD71C4885082E67", 00:19:21.896 "uuid": "e5f9152e-2aee-4781-bcd7-1c4885082e67" 00:19:21.896 } 00:19:21.896 ] 00:19:21.896 }, 00:19:21.896 { 00:19:21.896 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:21.896 "subtype": "NVMe", 00:19:21.896 "listen_addresses": [ 00:19:21.896 { 00:19:21.896 "trtype": "VFIOUSER", 00:19:21.896 "adrfam": "IPv4", 00:19:21.896 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:21.896 "trsvcid": "0" 00:19:21.896 } 00:19:21.896 ], 00:19:21.896 "allow_any_host": true, 00:19:21.896 "hosts": [], 00:19:21.896 "serial_number": "SPDK2", 00:19:21.896 "model_number": "SPDK bdev Controller", 00:19:21.896 "max_namespaces": 32, 00:19:21.896 "min_cntlid": 1, 00:19:21.896 "max_cntlid": 65519, 00:19:21.896 "namespaces": [ 00:19:21.896 
{ 00:19:21.896 "nsid": 1, 00:19:21.896 "bdev_name": "Malloc2", 00:19:21.896 "name": "Malloc2", 00:19:21.896 "nguid": "F3C4D9912C194E59A1BEFEBDD91B45A9", 00:19:21.896 "uuid": "f3c4d991-2c19-4e59-a1be-febdd91b45a9" 00:19:21.896 }, 00:19:21.896 { 00:19:21.896 "nsid": 2, 00:19:21.896 "bdev_name": "Malloc4", 00:19:21.896 "name": "Malloc4", 00:19:21.896 "nguid": "8E96BAB08EB4465986334D8079A8530A", 00:19:21.896 "uuid": "8e96bab0-8eb4-4659-8633-4d8079a8530a" 00:19:21.896 } 00:19:21.897 ] 00:19:21.897 } 00:19:21.897 ] 00:19:21.897 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 304776 00:19:21.897 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:21.897 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 296664 00:19:21.897 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 296664 ']' 00:19:21.897 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 296664 00:19:21.897 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:21.897 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.897 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 296664 00:19:21.897 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.897 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.897 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 296664' 00:19:21.897 killing process with pid 296664 00:19:21.897 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 296664 00:19:21.897 03:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 296664 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=305015 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 305015' 00:19:22.210 Process pid: 305015 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # 
waitforlisten 305015 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 305015 ']' 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.210 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:22.210 [2024-12-14 03:00:37.185555] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:22.210 [2024-12-14 03:00:37.186391] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:19:22.210 [2024-12-14 03:00:37.186432] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.210 [2024-12-14 03:00:37.245194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:22.210 [2024-12-14 03:00:37.267904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.210 [2024-12-14 03:00:37.267938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.210 [2024-12-14 03:00:37.267945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.210 [2024-12-14 03:00:37.267951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.210 [2024-12-14 03:00:37.267956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.210 [2024-12-14 03:00:37.270330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.210 [2024-12-14 03:00:37.270375] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.210 [2024-12-14 03:00:37.270485] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.210 [2024-12-14 03:00:37.270486] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.507 [2024-12-14 03:00:37.332910] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:22.507 [2024-12-14 03:00:37.333561] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:22.507 [2024-12-14 03:00:37.334231] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:22.507 [2024-12-14 03:00:37.334351] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:22.507 [2024-12-14 03:00:37.334467] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
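For reference, the interrupt-mode VFIO-user setup traced above and in the rpc.py calls that follow reduces to roughly the sequence below. This is a condensed sketch assembled from the xtrace output, not the test script itself: the SPDK repo path, core mask, and pid are specific to this CI host, rpc.py is abbreviated to its in-repo path, and the target is actually launched in the background and waited on with waitforlisten.

    # Launch the nvmf target in interrupt mode on cores 0-3 (nvmfpid=305015 in the trace above).
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &

    # Create the VFIO-user transport with the script's '-M -I' transport arguments.
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I

    # Per device: backing malloc bdev, subsystem, namespace, and a VFIO-user listener
    # rooted under /var/run/vfio-user (mirrors the rpc.py calls traced next).
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done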
00:19:22.507 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.507 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:22.507 03:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:23.569 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:23.569 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:23.569 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:23.569 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:23.569 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:23.569 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:23.859 Malloc1 00:19:23.859 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:24.153 03:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:24.153 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:24.456 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:24.456 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:24.456 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:24.755 Malloc2 00:19:24.755 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:24.755 03:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:25.041 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:25.300 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:25.300 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 305015 00:19:25.300 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 305015 ']' 00:19:25.300 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 305015 00:19:25.300 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:25.300 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.300 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305015 00:19:25.300 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.300 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.300 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305015' 00:19:25.300 killing process with pid 305015 00:19:25.300 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 305015 00:19:25.300 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 305015 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:25.559 00:19:25.559 real 0m51.223s 00:19:25.559 user 3m18.650s 00:19:25.559 sys 0m3.154s 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:25.559 ************************************ 00:19:25.559 END TEST nvmf_vfio_user 00:19:25.559 ************************************ 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:25.559 ************************************ 00:19:25.559 START TEST nvmf_vfio_user_nvme_compliance 00:19:25.559 ************************************ 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:25.559 * Looking for test storage... 
00:19:25.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:25.559 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:25.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.819 --rc genhtml_branch_coverage=1 00:19:25.819 --rc genhtml_function_coverage=1 00:19:25.819 --rc genhtml_legend=1 00:19:25.819 --rc geninfo_all_blocks=1 00:19:25.819 --rc geninfo_unexecuted_blocks=1 00:19:25.819 00:19:25.819 ' 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:25.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.819 --rc genhtml_branch_coverage=1 00:19:25.819 --rc genhtml_function_coverage=1 00:19:25.819 --rc genhtml_legend=1 00:19:25.819 --rc geninfo_all_blocks=1 00:19:25.819 --rc geninfo_unexecuted_blocks=1 00:19:25.819 00:19:25.819 ' 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:25.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.819 --rc genhtml_branch_coverage=1 00:19:25.819 --rc genhtml_function_coverage=1 00:19:25.819 --rc genhtml_legend=1 00:19:25.819 --rc geninfo_all_blocks=1 00:19:25.819 --rc geninfo_unexecuted_blocks=1 00:19:25.819 00:19:25.819 ' 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:25.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.819 --rc genhtml_branch_coverage=1 00:19:25.819 --rc genhtml_function_coverage=1 00:19:25.819 --rc genhtml_legend=1 00:19:25.819 --rc geninfo_all_blocks=1 00:19:25.819 --rc 
geninfo_unexecuted_blocks=1 00:19:25.819 00:19:25.819 ' 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:25.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=305575 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 305575' 00:19:25.819 Process pid: 305575 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 305575 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 305575 ']' 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.819 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:25.819 [2024-12-14 03:00:40.791062] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:19:25.819 [2024-12-14 03:00:40.791105] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.819 [2024-12-14 03:00:40.863483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:25.819 [2024-12-14 03:00:40.884863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.819 [2024-12-14 03:00:40.884898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.819 [2024-12-14 03:00:40.884905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.819 [2024-12-14 03:00:40.884911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.819 [2024-12-14 03:00:40.884917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.819 [2024-12-14 03:00:40.886212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.819 [2024-12-14 03:00:40.886337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.819 [2024-12-14 03:00:40.886338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.077 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.077 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:26.077 03:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:27.012 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:27.012 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:27.012 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:27.012 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.012 03:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:27.012 malloc0 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:27.012 03:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.012 03:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:27.270 00:19:27.270 00:19:27.270 CUnit - A unit testing framework for C - Version 2.1-3 00:19:27.270 http://cunit.sourceforge.net/ 00:19:27.270 00:19:27.270 00:19:27.270 Suite: nvme_compliance 00:19:27.270 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-14 03:00:42.231635] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:27.270 [2024-12-14 03:00:42.232954] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:27.270 [2024-12-14 03:00:42.232968] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:27.270 [2024-12-14 03:00:42.232975] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:27.270 [2024-12-14 03:00:42.234654] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:27.270 passed 00:19:27.270 Test: admin_identify_ctrlr_verify_fused ...[2024-12-14 03:00:42.314217] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:27.270 [2024-12-14 03:00:42.319247] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:27.270 passed 00:19:27.270 Test: admin_identify_ns ...[2024-12-14 03:00:42.398836] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:27.529 [2024-12-14 03:00:42.459324] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:27.529 [2024-12-14 03:00:42.467327] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:27.529 [2024-12-14 03:00:42.488420] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:19:27.529 passed 00:19:27.529 Test: admin_get_features_mandatory_features ...[2024-12-14 03:00:42.563768] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:27.529 [2024-12-14 03:00:42.567792] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:27.529 passed 00:19:27.529 Test: admin_get_features_optional_features ...[2024-12-14 03:00:42.647326] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:27.529 [2024-12-14 03:00:42.651361] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:27.787 passed 00:19:27.787 Test: admin_set_features_number_of_queues ...[2024-12-14 03:00:42.732163] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:27.787 [2024-12-14 03:00:42.837433] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:27.787 passed 00:19:27.787 Test: admin_get_log_page_mandatory_logs ...[2024-12-14 03:00:42.911319] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:27.787 [2024-12-14 03:00:42.914332] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:28.046 passed 00:19:28.046 Test: admin_get_log_page_with_lpo ...[2024-12-14 03:00:42.993667] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:28.046 [2024-12-14 03:00:43.065322] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:28.046 [2024-12-14 03:00:43.078389] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:28.046 passed 00:19:28.046 Test: fabric_property_get ...[2024-12-14 03:00:43.156321] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:28.046 [2024-12-14 03:00:43.157559] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:28.046 [2024-12-14 03:00:43.159348] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:28.305 passed 00:19:28.305 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-14 03:00:43.240948] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:28.305 [2024-12-14 03:00:43.242182] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:28.305 [2024-12-14 03:00:43.243966] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:28.305 passed 00:19:28.305 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-14 03:00:43.320590] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:28.305 [2024-12-14 03:00:43.404322] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:28.305 [2024-12-14 03:00:43.420316] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:28.305 [2024-12-14 03:00:43.425408] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:28.563 passed 00:19:28.563 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-14 03:00:43.501047] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:28.563 [2024-12-14 03:00:43.502281] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:28.563 [2024-12-14 03:00:43.506075] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:19:28.563 passed 00:19:28.563 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-14 03:00:43.584017] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:28.563 [2024-12-14 03:00:43.658323] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:28.563 [2024-12-14 03:00:43.682320] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:28.563 [2024-12-14 03:00:43.687394] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:28.822 passed 00:19:28.822 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-14 03:00:43.762973] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:28.822 [2024-12-14 03:00:43.764211] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:28.822 [2024-12-14 03:00:43.764237] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:28.822 [2024-12-14 03:00:43.765997] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:28.822 passed 00:19:28.822 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-14 03:00:43.845829] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:28.822 [2024-12-14 03:00:43.938322] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:28.822 [2024-12-14 03:00:43.946319] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:28.822 [2024-12-14 03:00:43.954319] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:29.080 [2024-12-14 03:00:43.962318] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:29.080 [2024-12-14 03:00:43.991409] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:29.080 passed 00:19:29.080 Test: admin_create_io_sq_verify_pc ...[2024-12-14 03:00:44.067129] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:29.080 [2024-12-14 03:00:44.084326] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:29.080 [2024-12-14 03:00:44.102304] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:29.080 passed 00:19:29.080 Test: admin_create_io_qp_max_qps ...[2024-12-14 03:00:44.175810] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.457 [2024-12-14 03:00:45.263321] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:30.715 [2024-12-14 03:00:45.658565] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.715 passed 00:19:30.715 Test: admin_create_io_sq_shared_cq ...[2024-12-14 03:00:45.735500] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.974 [2024-12-14 03:00:45.868329] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:30.974 [2024-12-14 03:00:45.905384] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.974 passed 00:19:30.974 00:19:30.974 Run Summary: Type Total Ran Passed Failed Inactive 00:19:30.974 suites 1 1 n/a 0 0 00:19:30.974 tests 18 18 18 0 0 00:19:30.974 asserts 
360 360 360 0 n/a 00:19:30.974 00:19:30.974 Elapsed time = 1.513 seconds 00:19:30.974 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 305575 00:19:30.974 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 305575 ']' 00:19:30.974 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 305575 00:19:30.974 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:30.974 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.974 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305575 00:19:30.974 03:00:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.974 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.974 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305575' 00:19:30.974 killing process with pid 305575 00:19:30.974 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 305575 00:19:30.974 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 305575 00:19:31.232 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:31.232 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:31.232 00:19:31.232 real 0m5.644s 00:19:31.232 user 0m15.847s 00:19:31.232 sys 0m0.496s 00:19:31.232 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.232 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:31.232 ************************************ 00:19:31.232 END TEST nvmf_vfio_user_nvme_compliance 00:19:31.232 ************************************ 00:19:31.232 03:00:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:31.232 03:00:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:31.232 03:00:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.232 03:00:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:31.232 ************************************ 00:19:31.232 START TEST nvmf_vfio_user_fuzz 00:19:31.232 ************************************ 00:19:31.232 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:31.232 * Looking for test storage... 
00:19:31.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:31.233 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:31.233 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:19:31.233 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:31.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.492 --rc genhtml_branch_coverage=1 00:19:31.492 --rc genhtml_function_coverage=1 00:19:31.492 --rc genhtml_legend=1 00:19:31.492 --rc geninfo_all_blocks=1 00:19:31.492 --rc geninfo_unexecuted_blocks=1 00:19:31.492 00:19:31.492 ' 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:31.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.492 --rc genhtml_branch_coverage=1 00:19:31.492 --rc genhtml_function_coverage=1 00:19:31.492 --rc genhtml_legend=1 00:19:31.492 --rc geninfo_all_blocks=1 00:19:31.492 --rc geninfo_unexecuted_blocks=1 00:19:31.492 00:19:31.492 ' 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:31.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.492 --rc genhtml_branch_coverage=1 00:19:31.492 --rc genhtml_function_coverage=1 00:19:31.492 --rc genhtml_legend=1 00:19:31.492 --rc geninfo_all_blocks=1 00:19:31.492 --rc geninfo_unexecuted_blocks=1 00:19:31.492 00:19:31.492 ' 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:31.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.492 --rc genhtml_branch_coverage=1 00:19:31.492 --rc genhtml_function_coverage=1 00:19:31.492 --rc genhtml_legend=1 00:19:31.492 --rc geninfo_all_blocks=1 00:19:31.492 --rc geninfo_unexecuted_blocks=1 00:19:31.492 00:19:31.492 ' 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.492 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:31.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=305712 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 305712' 00:19:31.493 Process pid: 305712 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 305712 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 305712 ']' 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
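The "[: : integer expression expected" message just above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': the variable being tested expands to an empty string, and test needs an integer on both sides of -eq, so it prints the error and returns false, which is why the script simply carries on. A minimal sketch of the failure and a guarded form (the variable name here is illustrative, not the one common.sh actually tests):

    flag=""
    [ "$flag" -eq 1 ] && echo "feature on"       # -> [: : integer expression expected, echo skipped
    [ "${flag:-0}" -eq 1 ] && echo "feature on"  # defaulted to 0: no error, test is simply false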
00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.493 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:31.752 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.752 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:31.752 03:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:32.689 malloc0 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
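At this point the fuzz target is fully configured: the rpc_cmd calls above created a VFIOUSER transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2021-09.io.spdk:cnode0 with that bdev as a namespace, and a listener at /var/run/vfio-user, and the resulting trid string is what gets handed to the fuzzer below. Outside the test harness the same setup looks roughly like this (a sketch; rpc_cmd in these scripts is the autotest wrapper around SPDK's RPC socket, assumed here to be equivalent to scripts/rpc.py against the default /var/tmp/spdk.sock):

    rpc=./scripts/rpc.py                      # run from the SPDK repo root
    $rpc nvmf_create_transport -t VFIOUSER
    $rpc bdev_malloc_create 64 512 -b malloc0
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0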
00:19:32.689 03:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:04.773 Fuzzing completed. Shutting down the fuzz application 00:20:04.773 00:20:04.773 Dumping successful admin opcodes: 00:20:04.773 9, 10, 00:20:04.773 Dumping successful io opcodes: 00:20:04.773 0, 00:20:04.773 NS: 0x20000081ef00 I/O qp, Total commands completed: 995668, total successful commands: 3896, random_seed: 2099556160 00:20:04.773 NS: 0x20000081ef00 admin qp, Total commands completed: 240896, total successful commands: 56, random_seed: 819775296 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 305712 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 305712 ']' 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 305712 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305712 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305712' 00:20:04.773 killing process with pid 305712 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 305712 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 305712 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:04.773 00:20:04.773 real 0m32.150s 00:20:04.773 user 0m29.960s 00:20:04.773 sys 0m30.995s 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:04.773 ************************************ 
00:20:04.773 END TEST nvmf_vfio_user_fuzz 00:20:04.773 ************************************ 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:04.773 ************************************ 00:20:04.773 START TEST nvmf_auth_target 00:20:04.773 ************************************ 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:04.773 * Looking for test storage... 00:20:04.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:04.773 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:04.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.774 --rc genhtml_branch_coverage=1 00:20:04.774 --rc genhtml_function_coverage=1 00:20:04.774 --rc genhtml_legend=1 00:20:04.774 --rc geninfo_all_blocks=1 00:20:04.774 --rc geninfo_unexecuted_blocks=1 00:20:04.774 00:20:04.774 ' 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:04.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.774 --rc genhtml_branch_coverage=1 00:20:04.774 --rc genhtml_function_coverage=1 00:20:04.774 --rc genhtml_legend=1 00:20:04.774 --rc geninfo_all_blocks=1 00:20:04.774 --rc geninfo_unexecuted_blocks=1 00:20:04.774 00:20:04.774 ' 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:04.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.774 --rc genhtml_branch_coverage=1 00:20:04.774 --rc genhtml_function_coverage=1 00:20:04.774 --rc genhtml_legend=1 00:20:04.774 --rc geninfo_all_blocks=1 00:20:04.774 --rc geninfo_unexecuted_blocks=1 00:20:04.774 00:20:04.774 ' 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:04.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.774 --rc genhtml_branch_coverage=1 00:20:04.774 --rc genhtml_function_coverage=1 00:20:04.774 --rc genhtml_legend=1 00:20:04.774 --rc geninfo_all_blocks=1 00:20:04.774 --rc geninfo_unexecuted_blocks=1 00:20:04.774 00:20:04.774 ' 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.774 03:01:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:04.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:04.774 03:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:10.051 
03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:10.051 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:10.051 03:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:10.051 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:10.051 Found net devices under 0000:af:00.0: cvl_0_0 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:10.051 Found net devices under 0000:af:00.1: cvl_0_1 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:10.051 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:10.051 03:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:10.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:10.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:20:10.051 00:20:10.051 --- 10.0.0.2 ping statistics --- 00:20:10.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.051 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:10.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:10.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:20:10.052 00:20:10.052 --- 10.0.0.1 ping statistics --- 00:20:10.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.052 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=308280 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 308280 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 308280 ']' 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
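The nvmf_tcp_init sequence above splits the two E810 ports found in the device scan: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and gets the target address 10.0.0.2, cvl_0_1 stays in the root namespace with the initiator address 10.0.0.1, an iptables rule opens TCP port 4420 on the initiator side, and the two pings confirm the link in both directions before nvmf_tgt is launched inside the namespace. Condensed, the bring-up the trace performs is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP in
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1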
00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=308308 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3f20c760fe7fff16a2afc831099d4290e64979caf06c42ef 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wwy 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3f20c760fe7fff16a2afc831099d4290e64979caf06c42ef 0 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3f20c760fe7fff16a2afc831099d4290e64979caf06c42ef 0 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3f20c760fe7fff16a2afc831099d4290e64979caf06c42ef 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wwy 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wwy 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.wwy 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=771fb33440d8f8e3d18fbec9842395cbd3cce0377bd4f6c868bd929406d81c90 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.T0a 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 771fb33440d8f8e3d18fbec9842395cbd3cce0377bd4f6c868bd929406d81c90 3 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 771fb33440d8f8e3d18fbec9842395cbd3cce0377bd4f6c868bd929406d81c90 3 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=771fb33440d8f8e3d18fbec9842395cbd3cce0377bd4f6c868bd929406d81c90 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.T0a 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.T0a 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.T0a 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
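These gen_dhchap_key calls produce the DH-HMAC-CHAP secrets the auth test will use: keys[0] (digest "null", 48 hex chars) and its companion ckeys[0] (sha512, 64 hex chars) are finished above, and the sha256/sha384 pairs follow below. Every call in the trace follows the same pattern; a condensed sketch of that pattern (the inline "python -" step is nvmf/common.sh's format_dhchap_key helper, which turns the raw hex secret into a representation carrying the DHHC-1 prefix and digest id seen in the trace — the exact encoding is not visible here, so take that part as an assumption):

    len=48                                          # hex chars requested for the "null"-digest key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # e.g. 3f20c760...06c42ef
    file=$(mktemp -t spdk.key-null.XXX)             # e.g. /tmp/spdk.key-null.wwy
    # format_dhchap_key "$key" 0  -> pipes through "python -" and fills $file
    chmod 0600 "$file"                              # secret stays owner-readable only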
00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f68d8e4ea4eaa3998331605c4da015e4 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.14D 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f68d8e4ea4eaa3998331605c4da015e4 1 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f68d8e4ea4eaa3998331605c4da015e4 1 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f68d8e4ea4eaa3998331605c4da015e4 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:10.052 03:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:10.052 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.14D 00:20:10.052 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.14D 00:20:10.052 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.14D 00:20:10.052 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:10.052 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:10.052 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:10.052 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:10.052 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:10.052 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:10.052 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:10.052 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3f235ba093d65d36a38b6fa99f6642d471a1958220f84d98 00:20:10.052 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:10.052 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Rbb 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3f235ba093d65d36a38b6fa99f6642d471a1958220f84d98 2 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3f235ba093d65d36a38b6fa99f6642d471a1958220f84d98 2 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:10.053 03:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3f235ba093d65d36a38b6fa99f6642d471a1958220f84d98 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Rbb 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Rbb 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Rbb 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f8870bf700e644e9dd3ff54d2017afc0fbd5999e6740d36c 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.n1o 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f8870bf700e644e9dd3ff54d2017afc0fbd5999e6740d36c 2 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f8870bf700e644e9dd3ff54d2017afc0fbd5999e6740d36c 2 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f8870bf700e644e9dd3ff54d2017afc0fbd5999e6740d36c 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.n1o 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.n1o 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.n1o 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
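Going the other way is useful when cross-checking this log: the --dhchap-secret / --dhchap-ctrl-secret strings passed to nvme connect further down are these same hex keys re-wrapped, so stripping the DHHC-1 envelope and the 4-byte trailer recovers them. A minimal unwrap helper, assuming GNU coreutils (head -c with a negative count):

    # Sketch: unwrap a DHHC-1 secret back to the raw hex key generated above,
    # e.g. the DHHC-1:03:NzcxZmIz... secret used later in this log unwraps to
    # the 771fb334... key printed a few entries up.
    decode_dhchap_key() {
        cut -d: -f3 <<< "$1" | base64 -d | head -c -4
        echo
    }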
00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e22e4ddf0642ed4f38595e03560929d3 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HF0 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e22e4ddf0642ed4f38595e03560929d3 1 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e22e4ddf0642ed4f38595e03560929d3 1 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e22e4ddf0642ed4f38595e03560929d3 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:10.053 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HF0 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HF0 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.HF0 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=16afc41f59bc22655d01ab56c7a65b05025601a6edeb1ae599a3d1223698ddbe 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Lul 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 16afc41f59bc22655d01ab56c7a65b05025601a6edeb1ae599a3d1223698ddbe 3 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 16afc41f59bc22655d01ab56c7a65b05025601a6edeb1ae599a3d1223698ddbe 3 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=16afc41f59bc22655d01ab56c7a65b05025601a6edeb1ae599a3d1223698ddbe 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Lul 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Lul 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Lul 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 308280 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 308280 ']' 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.312 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 308308 /var/tmp/host.sock 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 308308 ']' 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:10.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
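From here on two RPC endpoints are in play: rpc_cmd (autotest_common.sh) talks to the nvmf target over the default /var/tmp/spdk.sock, while the hostrpc helper at target/auth.sh@31 routes every call to the host-side SPDK application listening on /var/tmp/host.sock. The keyring_file_add_key traces that follow load each generated key file on both sides; the wrapper and the loop body, read straight off the @31 and @108-@113 markers below, look like this:

    hostrpc() {
        # every target/auth.sh@31 trace below expands to exactly this invocation
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }

    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"      # target-side keyring
        hostrpc keyring_file_add_key "key$i" "${keys[i]}"      # host-side keyring
        if [[ -n ${ckeys[i]} ]]; then                          # key3 has no controller key
            rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
            hostrpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
        fi
    done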
00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wwy 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wwy 00:20:10.571 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wwy 00:20:10.829 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.T0a ]] 00:20:10.829 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.T0a 00:20:10.829 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.829 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.829 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.829 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.T0a 00:20:10.829 03:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.T0a 00:20:11.088 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:11.088 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.14D 00:20:11.088 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.088 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.088 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.088 03:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.14D 00:20:11.088 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.14D 00:20:11.346 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Rbb ]] 00:20:11.346 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Rbb 00:20:11.346 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.346 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.346 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.346 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Rbb 00:20:11.346 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Rbb 00:20:11.346 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:11.346 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.n1o 00:20:11.346 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.346 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.346 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.346 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.n1o 00:20:11.346 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.n1o 00:20:11.608 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.HF0 ]] 00:20:11.608 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HF0 00:20:11.608 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.608 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.608 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.608 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HF0 00:20:11.608 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HF0 00:20:11.868 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:11.868 03:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Lul 00:20:11.868 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.868 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.868 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.868 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Lul 00:20:11.868 03:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Lul 00:20:12.127 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:12.127 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:12.127 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.127 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.127 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:12.127 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:12.385 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:12.385 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.385 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.385 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:12.385 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:12.386 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.386 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.386 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.386 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.386 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.386 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.386 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.386 
03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.386 00:20:12.644 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.644 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.644 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.644 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.644 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.644 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.644 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.644 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.644 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.644 { 00:20:12.644 "cntlid": 1, 00:20:12.644 "qid": 0, 00:20:12.644 "state": "enabled", 00:20:12.645 "thread": "nvmf_tgt_poll_group_000", 00:20:12.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:12.645 "listen_address": { 00:20:12.645 "trtype": "TCP", 00:20:12.645 "adrfam": "IPv4", 00:20:12.645 "traddr": "10.0.0.2", 00:20:12.645 "trsvcid": "4420" 00:20:12.645 }, 00:20:12.645 "peer_address": { 00:20:12.645 "trtype": "TCP", 00:20:12.645 "adrfam": "IPv4", 00:20:12.645 "traddr": "10.0.0.1", 00:20:12.645 "trsvcid": "36306" 00:20:12.645 }, 00:20:12.645 "auth": { 00:20:12.645 "state": "completed", 00:20:12.645 "digest": "sha256", 00:20:12.645 "dhgroup": "null" 00:20:12.645 } 00:20:12.645 } 00:20:12.645 ]' 00:20:12.645 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.645 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.645 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.903 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:12.903 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.903 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.903 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.903 03:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.162 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:20:13.162 03:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.446 03:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.446 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.704 00:20:16.704 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.704 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.704 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.963 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.963 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.963 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.963 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.963 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.963 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.963 { 00:20:16.963 "cntlid": 3, 00:20:16.963 "qid": 0, 00:20:16.963 "state": "enabled", 00:20:16.963 "thread": "nvmf_tgt_poll_group_000", 00:20:16.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.963 "listen_address": { 00:20:16.963 "trtype": "TCP", 00:20:16.963 "adrfam": "IPv4", 00:20:16.963 "traddr": "10.0.0.2", 00:20:16.963 "trsvcid": "4420" 00:20:16.963 }, 00:20:16.963 "peer_address": { 00:20:16.963 "trtype": "TCP", 00:20:16.963 "adrfam": "IPv4", 00:20:16.963 "traddr": "10.0.0.1", 00:20:16.963 "trsvcid": "59958" 00:20:16.963 }, 00:20:16.963 "auth": { 00:20:16.963 "state": "completed", 00:20:16.963 "digest": "sha256", 00:20:16.963 "dhgroup": "null" 00:20:16.963 } 00:20:16.963 } 00:20:16.963 ]' 00:20:16.963 03:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.963 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.963 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.963 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:16.963 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.221 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.221 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.221 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.221 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:20:17.221 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:20:17.788 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.788 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:17.788 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.788 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.788 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.788 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.788 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:17.788 03:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:18.047 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:18.047 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.047 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.047 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:18.047 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:18.047 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.047 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.047 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.047 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.047 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.047 03:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.047 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.047 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.307 00:20:18.307 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.307 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.307 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.565 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.565 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.565 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.565 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.565 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.565 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.565 { 00:20:18.565 "cntlid": 5, 00:20:18.565 "qid": 0, 00:20:18.565 "state": "enabled", 00:20:18.566 "thread": "nvmf_tgt_poll_group_000", 00:20:18.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:18.566 "listen_address": { 00:20:18.566 "trtype": "TCP", 00:20:18.566 "adrfam": "IPv4", 00:20:18.566 "traddr": "10.0.0.2", 00:20:18.566 "trsvcid": "4420" 00:20:18.566 }, 00:20:18.566 "peer_address": { 00:20:18.566 "trtype": "TCP", 00:20:18.566 "adrfam": "IPv4", 00:20:18.566 "traddr": "10.0.0.1", 00:20:18.566 "trsvcid": "59976" 00:20:18.566 }, 00:20:18.566 "auth": { 00:20:18.566 "state": "completed", 00:20:18.566 "digest": "sha256", 00:20:18.566 "dhgroup": "null" 00:20:18.566 } 00:20:18.566 } 00:20:18.566 ]' 00:20:18.566 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.566 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.566 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.566 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:18.566 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.566 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.566 03:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.566 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.824 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:20:18.824 03:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:20:19.392 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.392 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:19.392 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.392 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.392 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.392 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.392 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:19.392 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:19.651 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:19.651 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.651 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.651 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:19.651 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:19.651 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.651 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:19.651 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.651 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:19.651 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.651 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:19.652 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.652 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.910 00:20:19.910 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.910 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.910 03:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.170 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.170 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.170 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.170 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.170 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.170 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.170 { 00:20:20.170 "cntlid": 7, 00:20:20.170 "qid": 0, 00:20:20.170 "state": "enabled", 00:20:20.170 "thread": "nvmf_tgt_poll_group_000", 00:20:20.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:20.170 "listen_address": { 00:20:20.170 "trtype": "TCP", 00:20:20.170 "adrfam": "IPv4", 00:20:20.170 "traddr": "10.0.0.2", 00:20:20.170 "trsvcid": "4420" 00:20:20.170 }, 00:20:20.170 "peer_address": { 00:20:20.170 "trtype": "TCP", 00:20:20.170 "adrfam": "IPv4", 00:20:20.170 "traddr": "10.0.0.1", 00:20:20.170 "trsvcid": "60014" 00:20:20.170 }, 00:20:20.170 "auth": { 00:20:20.170 "state": "completed", 00:20:20.170 "digest": "sha256", 00:20:20.170 "dhgroup": "null" 00:20:20.170 } 00:20:20.170 } 00:20:20.170 ]' 00:20:20.170 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.170 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.170 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.170 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:20.171 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.171 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.171 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.171 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.430 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:20:20.430 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:20:20.997 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.997 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.997 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.997 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.997 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.997 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.997 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.997 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:20.997 03:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:21.256 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:21.256 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.256 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.256 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:21.256 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:21.256 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.256 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.256 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.256 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.256 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.256 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.256 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.256 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.514 00:20:21.514 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.515 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.515 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.515 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.515 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.515 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.515 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.774 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.774 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.774 { 00:20:21.774 "cntlid": 9, 00:20:21.774 "qid": 0, 00:20:21.774 "state": "enabled", 00:20:21.774 "thread": "nvmf_tgt_poll_group_000", 00:20:21.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.774 "listen_address": { 00:20:21.774 "trtype": "TCP", 00:20:21.774 "adrfam": "IPv4", 00:20:21.774 "traddr": "10.0.0.2", 00:20:21.774 "trsvcid": "4420" 00:20:21.774 }, 00:20:21.774 "peer_address": { 00:20:21.774 "trtype": "TCP", 00:20:21.774 "adrfam": "IPv4", 00:20:21.774 "traddr": "10.0.0.1", 00:20:21.774 "trsvcid": "60042" 00:20:21.774 }, 00:20:21.774 "auth": { 00:20:21.774 "state": "completed", 00:20:21.774 "digest": "sha256", 00:20:21.774 "dhgroup": "ffdhe2048" 00:20:21.774 } 00:20:21.774 } 00:20:21.774 ]' 00:20:21.774 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.774 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.774 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.774 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:20:21.774 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.774 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.774 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.774 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.033 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:20:22.033 03:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:20:22.600 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.600 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:22.600 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.600 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.600 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.600 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.600 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.600 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.859 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:22.859 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.859 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.859 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:22.859 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:22.859 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.859 03:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.859 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.859 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.859 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.859 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.859 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.859 03:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.118 00:20:23.118 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.118 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.118 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.118 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.118 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.118 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.118 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.118 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.118 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.118 { 00:20:23.118 "cntlid": 11, 00:20:23.118 "qid": 0, 00:20:23.118 "state": "enabled", 00:20:23.118 "thread": "nvmf_tgt_poll_group_000", 00:20:23.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:23.118 "listen_address": { 00:20:23.118 "trtype": "TCP", 00:20:23.118 "adrfam": "IPv4", 00:20:23.118 "traddr": "10.0.0.2", 00:20:23.118 "trsvcid": "4420" 00:20:23.118 }, 00:20:23.118 "peer_address": { 00:20:23.118 "trtype": "TCP", 00:20:23.118 "adrfam": "IPv4", 00:20:23.118 "traddr": "10.0.0.1", 00:20:23.118 "trsvcid": "60068" 00:20:23.118 }, 00:20:23.118 "auth": { 00:20:23.118 "state": "completed", 00:20:23.118 "digest": "sha256", 00:20:23.118 "dhgroup": "ffdhe2048" 00:20:23.118 } 00:20:23.118 } 00:20:23.118 ]' 00:20:23.118 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.377 03:01:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.377 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.377 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.377 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.377 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.377 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.377 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.635 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:20:23.635 03:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:24.203 03:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.203 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.462 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.462 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.462 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.462 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.462 00:20:24.721 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.721 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.721 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.721 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.721 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.721 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.721 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.721 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.721 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.721 { 00:20:24.721 "cntlid": 13, 00:20:24.721 "qid": 0, 00:20:24.721 "state": "enabled", 00:20:24.721 "thread": "nvmf_tgt_poll_group_000", 00:20:24.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.721 "listen_address": { 00:20:24.721 "trtype": "TCP", 00:20:24.721 "adrfam": "IPv4", 00:20:24.721 "traddr": "10.0.0.2", 00:20:24.721 "trsvcid": "4420" 00:20:24.721 }, 00:20:24.721 "peer_address": { 00:20:24.721 "trtype": "TCP", 00:20:24.721 "adrfam": "IPv4", 00:20:24.721 "traddr": "10.0.0.1", 00:20:24.721 "trsvcid": "60104" 00:20:24.721 }, 00:20:24.721 "auth": { 00:20:24.721 "state": "completed", 00:20:24.721 "digest": 
"sha256", 00:20:24.721 "dhgroup": "ffdhe2048" 00:20:24.721 } 00:20:24.721 } 00:20:24.721 ]' 00:20:24.721 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.721 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.721 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.980 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:24.980 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.980 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.980 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.980 03:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.239 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:20:25.239 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:20:25.806 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.807 03:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.807 03:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.065 00:20:26.065 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.065 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.065 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.323 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.323 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.323 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.323 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.323 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.323 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.323 { 00:20:26.323 "cntlid": 15, 00:20:26.323 "qid": 0, 00:20:26.323 "state": "enabled", 00:20:26.323 "thread": "nvmf_tgt_poll_group_000", 00:20:26.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:26.323 "listen_address": { 00:20:26.323 "trtype": "TCP", 00:20:26.323 "adrfam": "IPv4", 00:20:26.323 "traddr": "10.0.0.2", 00:20:26.323 "trsvcid": "4420" 00:20:26.323 }, 00:20:26.323 "peer_address": { 00:20:26.323 "trtype": "TCP", 00:20:26.323 "adrfam": "IPv4", 00:20:26.323 "traddr": "10.0.0.1", 00:20:26.323 
"trsvcid": "41602" 00:20:26.323 }, 00:20:26.323 "auth": { 00:20:26.323 "state": "completed", 00:20:26.323 "digest": "sha256", 00:20:26.323 "dhgroup": "ffdhe2048" 00:20:26.323 } 00:20:26.323 } 00:20:26.323 ]' 00:20:26.323 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.323 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.323 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.581 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.581 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.581 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.581 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.581 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.582 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:20:26.582 03:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:20:27.149 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.149 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:27.149 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.149 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.149 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.149 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.149 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.149 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.149 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.407 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:27.408 03:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.408 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.408 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:27.408 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:27.408 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.408 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.408 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.408 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.408 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.408 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.408 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.408 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.666 00:20:27.666 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.666 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.666 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.925 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.925 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.925 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.925 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.925 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.925 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.925 { 00:20:27.925 "cntlid": 17, 00:20:27.925 "qid": 0, 00:20:27.925 "state": "enabled", 00:20:27.925 "thread": "nvmf_tgt_poll_group_000", 00:20:27.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:27.925 "listen_address": { 00:20:27.925 "trtype": "TCP", 00:20:27.925 "adrfam": "IPv4", 
00:20:27.925 "traddr": "10.0.0.2", 00:20:27.925 "trsvcid": "4420" 00:20:27.925 }, 00:20:27.925 "peer_address": { 00:20:27.925 "trtype": "TCP", 00:20:27.925 "adrfam": "IPv4", 00:20:27.925 "traddr": "10.0.0.1", 00:20:27.925 "trsvcid": "41634" 00:20:27.925 }, 00:20:27.925 "auth": { 00:20:27.925 "state": "completed", 00:20:27.925 "digest": "sha256", 00:20:27.925 "dhgroup": "ffdhe3072" 00:20:27.925 } 00:20:27.925 } 00:20:27.925 ]' 00:20:27.925 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.925 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.925 03:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.925 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.925 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.183 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.183 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.183 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.183 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:20:28.184 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:20:28.751 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.751 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:28.751 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.751 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.751 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.751 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.751 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:28.751 03:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:29.010 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:29.010 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.010 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.010 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:29.010 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:29.010 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.010 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.010 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.010 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.010 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.010 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.010 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.010 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.269 00:20:29.269 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.269 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.269 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.527 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.527 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.527 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.527 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.527 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.527 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.527 { 
00:20:29.527 "cntlid": 19, 00:20:29.527 "qid": 0, 00:20:29.527 "state": "enabled", 00:20:29.527 "thread": "nvmf_tgt_poll_group_000", 00:20:29.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:29.527 "listen_address": { 00:20:29.527 "trtype": "TCP", 00:20:29.527 "adrfam": "IPv4", 00:20:29.527 "traddr": "10.0.0.2", 00:20:29.527 "trsvcid": "4420" 00:20:29.527 }, 00:20:29.527 "peer_address": { 00:20:29.527 "trtype": "TCP", 00:20:29.527 "adrfam": "IPv4", 00:20:29.527 "traddr": "10.0.0.1", 00:20:29.527 "trsvcid": "41648" 00:20:29.527 }, 00:20:29.527 "auth": { 00:20:29.527 "state": "completed", 00:20:29.527 "digest": "sha256", 00:20:29.527 "dhgroup": "ffdhe3072" 00:20:29.527 } 00:20:29.527 } 00:20:29.527 ]' 00:20:29.527 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.527 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.527 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.527 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.527 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.527 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.527 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.527 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.786 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:20:29.786 03:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:20:30.352 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.352 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:30.352 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.352 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.352 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.352 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.352 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:30.352 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:30.610 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:30.610 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.610 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.610 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:30.610 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:30.610 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.610 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.610 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.610 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.610 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.610 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.610 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.610 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.868 00:20:30.868 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.868 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.868 03:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.126 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.126 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.126 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.126 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.126 03:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.126 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.126 { 00:20:31.126 "cntlid": 21, 00:20:31.126 "qid": 0, 00:20:31.126 "state": "enabled", 00:20:31.126 "thread": "nvmf_tgt_poll_group_000", 00:20:31.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:31.126 "listen_address": { 00:20:31.126 "trtype": "TCP", 00:20:31.126 "adrfam": "IPv4", 00:20:31.126 "traddr": "10.0.0.2", 00:20:31.126 "trsvcid": "4420" 00:20:31.126 }, 00:20:31.126 "peer_address": { 00:20:31.126 "trtype": "TCP", 00:20:31.126 "adrfam": "IPv4", 00:20:31.126 "traddr": "10.0.0.1", 00:20:31.126 "trsvcid": "41662" 00:20:31.126 }, 00:20:31.126 "auth": { 00:20:31.126 "state": "completed", 00:20:31.126 "digest": "sha256", 00:20:31.126 "dhgroup": "ffdhe3072" 00:20:31.126 } 00:20:31.126 } 00:20:31.126 ]' 00:20:31.126 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.126 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.127 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.127 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:31.127 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.127 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.127 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.127 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.386 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:20:31.386 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:20:31.954 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.954 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:31.954 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.954 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.954 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:31.954 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.954 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:31.954 03:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:32.212 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:32.212 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.212 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.212 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:32.212 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:32.212 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.212 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:32.212 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.212 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.212 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.212 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:32.212 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.212 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.470 00:20:32.470 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.470 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.470 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.728 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.728 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.728 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.728 03:01:47 
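(editor's note) As in the ffdhe2048 pass, key ID 3 has no companion controller key: the ${ckeys[$3]:+...} expansion is empty, so the host is registered with --dhchap-key key3 only, and the matching nvme connect later omits --dhchap-ctrl-secret. That iteration therefore exercises one-way DH-HMAC-CHAP, where the target authenticates the host but the controller is not authenticated back. The target-side registration for that case, sketched with the same placeholder host UUID:

  # key3 has no controller key: unidirectional authentication only
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --dhchap-key key3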
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.728 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.728 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.728 { 00:20:32.728 "cntlid": 23, 00:20:32.728 "qid": 0, 00:20:32.728 "state": "enabled", 00:20:32.729 "thread": "nvmf_tgt_poll_group_000", 00:20:32.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:32.729 "listen_address": { 00:20:32.729 "trtype": "TCP", 00:20:32.729 "adrfam": "IPv4", 00:20:32.729 "traddr": "10.0.0.2", 00:20:32.729 "trsvcid": "4420" 00:20:32.729 }, 00:20:32.729 "peer_address": { 00:20:32.729 "trtype": "TCP", 00:20:32.729 "adrfam": "IPv4", 00:20:32.729 "traddr": "10.0.0.1", 00:20:32.729 "trsvcid": "41676" 00:20:32.729 }, 00:20:32.729 "auth": { 00:20:32.729 "state": "completed", 00:20:32.729 "digest": "sha256", 00:20:32.729 "dhgroup": "ffdhe3072" 00:20:32.729 } 00:20:32.729 } 00:20:32.729 ]' 00:20:32.729 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.729 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.729 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.729 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.729 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.729 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.729 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.729 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.986 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:20:32.986 03:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:20:33.551 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.551 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.551 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.551 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.551 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:33.551 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.551 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.551 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.551 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.810 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:33.810 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.810 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.810 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:33.810 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.810 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.810 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.810 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.810 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.810 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.810 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.810 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.810 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.069 00:20:34.069 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.069 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.069 03:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.069 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.069 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.069 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.069 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.328 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.328 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.328 { 00:20:34.328 "cntlid": 25, 00:20:34.328 "qid": 0, 00:20:34.328 "state": "enabled", 00:20:34.328 "thread": "nvmf_tgt_poll_group_000", 00:20:34.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:34.328 "listen_address": { 00:20:34.328 "trtype": "TCP", 00:20:34.328 "adrfam": "IPv4", 00:20:34.328 "traddr": "10.0.0.2", 00:20:34.328 "trsvcid": "4420" 00:20:34.328 }, 00:20:34.328 "peer_address": { 00:20:34.328 "trtype": "TCP", 00:20:34.328 "adrfam": "IPv4", 00:20:34.328 "traddr": "10.0.0.1", 00:20:34.328 "trsvcid": "41694" 00:20:34.328 }, 00:20:34.328 "auth": { 00:20:34.328 "state": "completed", 00:20:34.328 "digest": "sha256", 00:20:34.328 "dhgroup": "ffdhe4096" 00:20:34.328 } 00:20:34.328 } 00:20:34.328 ]' 00:20:34.328 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.328 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.328 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.328 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:34.328 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.328 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.328 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.328 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.587 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:20:34.587 03:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:20:35.154 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.154 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:35.154 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.154 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.154 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.154 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.154 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.154 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.412 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:35.412 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.412 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.412 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:35.412 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.412 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.412 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.412 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.412 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.412 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.412 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.412 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.412 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.671 00:20:35.671 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.671 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.671 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.929 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.929 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.929 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.929 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.929 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.929 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.929 { 00:20:35.929 "cntlid": 27, 00:20:35.929 "qid": 0, 00:20:35.929 "state": "enabled", 00:20:35.929 "thread": "nvmf_tgt_poll_group_000", 00:20:35.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:35.929 "listen_address": { 00:20:35.929 "trtype": "TCP", 00:20:35.929 "adrfam": "IPv4", 00:20:35.929 "traddr": "10.0.0.2", 00:20:35.929 "trsvcid": "4420" 00:20:35.929 }, 00:20:35.929 "peer_address": { 00:20:35.929 "trtype": "TCP", 00:20:35.929 "adrfam": "IPv4", 00:20:35.929 "traddr": "10.0.0.1", 00:20:35.929 "trsvcid": "51950" 00:20:35.929 }, 00:20:35.929 "auth": { 00:20:35.929 "state": "completed", 00:20:35.929 "digest": "sha256", 00:20:35.929 "dhgroup": "ffdhe4096" 00:20:35.929 } 00:20:35.929 } 00:20:35.929 ]' 00:20:35.929 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.929 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.929 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.929 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.929 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.929 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.929 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.929 03:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.188 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:20:36.188 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:20:36.754 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:36.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.754 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:36.754 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.754 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.754 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.754 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.754 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:36.754 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:37.012 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:37.012 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.012 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.012 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:37.013 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:37.013 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.013 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.013 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.013 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.013 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.013 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.013 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.013 03:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.271 00:20:37.271 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:20:37.271 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.271 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.529 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.529 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.529 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.529 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.529 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.529 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.529 { 00:20:37.529 "cntlid": 29, 00:20:37.529 "qid": 0, 00:20:37.529 "state": "enabled", 00:20:37.529 "thread": "nvmf_tgt_poll_group_000", 00:20:37.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:37.529 "listen_address": { 00:20:37.529 "trtype": "TCP", 00:20:37.529 "adrfam": "IPv4", 00:20:37.529 "traddr": "10.0.0.2", 00:20:37.529 "trsvcid": "4420" 00:20:37.529 }, 00:20:37.529 "peer_address": { 00:20:37.529 "trtype": "TCP", 00:20:37.529 "adrfam": "IPv4", 00:20:37.529 "traddr": "10.0.0.1", 00:20:37.529 "trsvcid": "51978" 00:20:37.529 }, 00:20:37.529 "auth": { 00:20:37.529 "state": "completed", 00:20:37.529 "digest": "sha256", 00:20:37.529 "dhgroup": "ffdhe4096" 00:20:37.529 } 00:20:37.529 } 00:20:37.529 ]' 00:20:37.529 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.529 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.529 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.529 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.529 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.529 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.529 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.529 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.788 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:20:37.788 03:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: 
--dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:20:38.354 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.354 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:38.354 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.354 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.354 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.354 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.354 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:38.354 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:38.613 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:38.613 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.613 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.613 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:38.613 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:38.613 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.613 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:38.613 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.613 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.613 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.613 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:38.613 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.613 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.871 00:20:38.871 03:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.871 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.871 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.871 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.871 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.871 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.871 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.871 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.871 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.871 { 00:20:38.871 "cntlid": 31, 00:20:38.871 "qid": 0, 00:20:38.871 "state": "enabled", 00:20:38.871 "thread": "nvmf_tgt_poll_group_000", 00:20:38.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:38.871 "listen_address": { 00:20:38.871 "trtype": "TCP", 00:20:38.871 "adrfam": "IPv4", 00:20:38.871 "traddr": "10.0.0.2", 00:20:38.871 "trsvcid": "4420" 00:20:38.871 }, 00:20:38.871 "peer_address": { 00:20:38.871 "trtype": "TCP", 00:20:38.871 "adrfam": "IPv4", 00:20:38.871 "traddr": "10.0.0.1", 00:20:38.871 "trsvcid": "52006" 00:20:38.871 }, 00:20:38.871 "auth": { 00:20:38.871 "state": "completed", 00:20:38.871 "digest": "sha256", 00:20:38.871 "dhgroup": "ffdhe4096" 00:20:38.871 } 00:20:38.871 } 00:20:38.871 ]' 00:20:38.871 03:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.129 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.129 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.129 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.129 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.129 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.129 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.129 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.387 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:20:39.387 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:20:39.954 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.954 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:39.954 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.954 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.954 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.954 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.954 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.954 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:39.955 03:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:39.955 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:39.955 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.955 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:39.955 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:39.955 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:39.955 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.955 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.955 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.955 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.955 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.955 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.955 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.955 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.519 00:20:40.519 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.519 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.519 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.519 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.519 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.519 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.519 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.519 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.519 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.519 { 00:20:40.519 "cntlid": 33, 00:20:40.519 "qid": 0, 00:20:40.519 "state": "enabled", 00:20:40.519 "thread": "nvmf_tgt_poll_group_000", 00:20:40.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:40.519 "listen_address": { 00:20:40.519 "trtype": "TCP", 00:20:40.519 "adrfam": "IPv4", 00:20:40.519 "traddr": "10.0.0.2", 00:20:40.519 "trsvcid": "4420" 00:20:40.519 }, 00:20:40.519 "peer_address": { 00:20:40.519 "trtype": "TCP", 00:20:40.519 "adrfam": "IPv4", 00:20:40.519 "traddr": "10.0.0.1", 00:20:40.519 "trsvcid": "52030" 00:20:40.519 }, 00:20:40.519 "auth": { 00:20:40.519 "state": "completed", 00:20:40.519 "digest": "sha256", 00:20:40.519 "dhgroup": "ffdhe6144" 00:20:40.519 } 00:20:40.519 } 00:20:40.519 ]' 00:20:40.519 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.777 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.777 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.777 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:40.777 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.777 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.777 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.777 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.035 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret 
DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:20:41.036 03:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.603 03:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.170 00:20:42.170 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.170 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.170 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.170 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.170 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.170 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.170 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.170 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.170 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.170 { 00:20:42.170 "cntlid": 35, 00:20:42.170 "qid": 0, 00:20:42.170 "state": "enabled", 00:20:42.170 "thread": "nvmf_tgt_poll_group_000", 00:20:42.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:42.170 "listen_address": { 00:20:42.170 "trtype": "TCP", 00:20:42.170 "adrfam": "IPv4", 00:20:42.170 "traddr": "10.0.0.2", 00:20:42.170 "trsvcid": "4420" 00:20:42.170 }, 00:20:42.170 "peer_address": { 00:20:42.170 "trtype": "TCP", 00:20:42.170 "adrfam": "IPv4", 00:20:42.170 "traddr": "10.0.0.1", 00:20:42.170 "trsvcid": "52060" 00:20:42.170 }, 00:20:42.170 "auth": { 00:20:42.170 "state": "completed", 00:20:42.170 "digest": "sha256", 00:20:42.170 "dhgroup": "ffdhe6144" 00:20:42.170 } 00:20:42.170 } 00:20:42.170 ]' 00:20:42.170 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.429 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.429 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.429 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.429 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.429 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.429 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.429 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.687 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:20:42.687 03:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.254 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.821 00:20:43.821 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.821 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.821 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.821 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.821 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.821 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.821 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.821 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.821 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.821 { 00:20:43.821 "cntlid": 37, 00:20:43.821 "qid": 0, 00:20:43.821 "state": "enabled", 00:20:43.821 "thread": "nvmf_tgt_poll_group_000", 00:20:43.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:43.821 "listen_address": { 00:20:43.821 "trtype": "TCP", 00:20:43.821 "adrfam": "IPv4", 00:20:43.821 "traddr": "10.0.0.2", 00:20:43.821 "trsvcid": "4420" 00:20:43.821 }, 00:20:43.821 "peer_address": { 00:20:43.821 "trtype": "TCP", 00:20:43.821 "adrfam": "IPv4", 00:20:43.821 "traddr": "10.0.0.1", 00:20:43.821 "trsvcid": "52092" 00:20:43.821 }, 00:20:43.821 "auth": { 00:20:43.821 "state": "completed", 00:20:43.821 "digest": "sha256", 00:20:43.821 "dhgroup": "ffdhe6144" 00:20:43.821 } 00:20:43.821 } 00:20:43.821 ]' 00:20:43.821 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.080 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.080 03:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.080 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.080 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.080 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.080 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:44.080 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.338 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:20:44.338 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:20:44.905 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.905 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:44.905 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.905 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.905 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.905 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.905 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:44.905 03:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:45.163 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:45.163 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.163 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:45.163 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:45.163 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.163 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.163 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:45.163 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.163 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.163 03:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.163 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.163 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.163 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.422 00:20:45.422 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.422 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.422 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.680 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.680 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.680 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.680 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.680 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.680 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.680 { 00:20:45.680 "cntlid": 39, 00:20:45.680 "qid": 0, 00:20:45.680 "state": "enabled", 00:20:45.680 "thread": "nvmf_tgt_poll_group_000", 00:20:45.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:45.680 "listen_address": { 00:20:45.680 "trtype": "TCP", 00:20:45.680 "adrfam": "IPv4", 00:20:45.680 "traddr": "10.0.0.2", 00:20:45.680 "trsvcid": "4420" 00:20:45.680 }, 00:20:45.680 "peer_address": { 00:20:45.680 "trtype": "TCP", 00:20:45.680 "adrfam": "IPv4", 00:20:45.680 "traddr": "10.0.0.1", 00:20:45.680 "trsvcid": "53190" 00:20:45.680 }, 00:20:45.680 "auth": { 00:20:45.680 "state": "completed", 00:20:45.680 "digest": "sha256", 00:20:45.680 "dhgroup": "ffdhe6144" 00:20:45.680 } 00:20:45.680 } 00:20:45.680 ]' 00:20:45.680 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.680 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.680 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.680 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.680 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.681 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:45.681 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.681 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.939 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:20:45.939 03:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:20:46.505 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.505 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:46.505 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.505 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.505 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.505 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.505 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.505 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:46.505 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:46.763 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:46.763 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.763 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:46.763 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:46.763 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:46.763 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.764 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.764 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:46.764 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.764 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.764 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.764 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.764 03:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.331 00:20:47.331 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.331 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.331 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.331 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.331 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.331 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.331 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.331 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.331 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.331 { 00:20:47.331 "cntlid": 41, 00:20:47.331 "qid": 0, 00:20:47.331 "state": "enabled", 00:20:47.331 "thread": "nvmf_tgt_poll_group_000", 00:20:47.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:47.331 "listen_address": { 00:20:47.331 "trtype": "TCP", 00:20:47.331 "adrfam": "IPv4", 00:20:47.331 "traddr": "10.0.0.2", 00:20:47.331 "trsvcid": "4420" 00:20:47.331 }, 00:20:47.331 "peer_address": { 00:20:47.331 "trtype": "TCP", 00:20:47.331 "adrfam": "IPv4", 00:20:47.331 "traddr": "10.0.0.1", 00:20:47.331 "trsvcid": "53228" 00:20:47.331 }, 00:20:47.331 "auth": { 00:20:47.331 "state": "completed", 00:20:47.331 "digest": "sha256", 00:20:47.331 "dhgroup": "ffdhe8192" 00:20:47.331 } 00:20:47.331 } 00:20:47.331 ]' 00:20:47.331 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.589 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.589 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.589 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:47.589 03:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.589 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.589 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.589 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.848 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:20:47.848 03:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.414 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.673 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.673 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.673 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.673 03:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.931 00:20:48.931 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.931 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.931 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.190 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.190 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.190 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.190 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.190 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.190 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.190 { 00:20:49.190 "cntlid": 43, 00:20:49.190 "qid": 0, 00:20:49.190 "state": "enabled", 00:20:49.190 "thread": "nvmf_tgt_poll_group_000", 00:20:49.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:49.190 "listen_address": { 00:20:49.190 "trtype": "TCP", 00:20:49.190 "adrfam": "IPv4", 00:20:49.190 "traddr": "10.0.0.2", 00:20:49.190 "trsvcid": "4420" 00:20:49.190 }, 00:20:49.190 "peer_address": { 00:20:49.190 "trtype": "TCP", 00:20:49.190 "adrfam": "IPv4", 00:20:49.190 "traddr": "10.0.0.1", 00:20:49.190 "trsvcid": "53256" 00:20:49.190 }, 00:20:49.190 "auth": { 00:20:49.190 "state": "completed", 00:20:49.190 "digest": "sha256", 00:20:49.190 "dhgroup": "ffdhe8192" 00:20:49.190 } 00:20:49.190 } 00:20:49.190 ]' 00:20:49.190 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.190 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:49.190 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.190 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.190 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.448 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.448 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.448 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.448 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:20:49.448 03:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:20:50.015 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.015 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:50.015 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.015 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.015 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.015 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.015 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:50.015 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:50.272 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:50.272 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.272 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:50.272 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:50.272 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:50.272 03:02:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.273 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.273 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.273 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.273 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.273 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.273 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.273 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.838 00:20:50.838 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.838 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.838 03:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.097 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.097 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.097 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.097 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.097 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.097 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.097 { 00:20:51.097 "cntlid": 45, 00:20:51.097 "qid": 0, 00:20:51.097 "state": "enabled", 00:20:51.097 "thread": "nvmf_tgt_poll_group_000", 00:20:51.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:51.097 "listen_address": { 00:20:51.097 "trtype": "TCP", 00:20:51.097 "adrfam": "IPv4", 00:20:51.097 "traddr": "10.0.0.2", 00:20:51.097 "trsvcid": "4420" 00:20:51.097 }, 00:20:51.097 "peer_address": { 00:20:51.097 "trtype": "TCP", 00:20:51.097 "adrfam": "IPv4", 00:20:51.097 "traddr": "10.0.0.1", 00:20:51.097 "trsvcid": "53280" 00:20:51.097 }, 00:20:51.097 "auth": { 00:20:51.097 "state": "completed", 00:20:51.097 "digest": "sha256", 00:20:51.097 "dhgroup": "ffdhe8192" 00:20:51.097 } 00:20:51.097 } 00:20:51.097 ]' 00:20:51.097 
03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.097 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:51.097 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.097 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.097 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.097 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.097 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.097 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.356 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:20:51.356 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:20:51.922 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.922 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:51.922 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.922 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.922 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.922 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.922 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:51.922 03:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:52.180 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:52.180 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.180 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:52.180 03:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:52.180 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:52.180 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.180 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:52.180 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.180 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.180 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.180 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:52.180 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.180 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.746 00:20:52.746 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.746 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.747 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.747 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.747 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.747 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.747 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.747 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.747 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.747 { 00:20:52.747 "cntlid": 47, 00:20:52.747 "qid": 0, 00:20:52.747 "state": "enabled", 00:20:52.747 "thread": "nvmf_tgt_poll_group_000", 00:20:52.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:52.747 "listen_address": { 00:20:52.747 "trtype": "TCP", 00:20:52.747 "adrfam": "IPv4", 00:20:52.747 "traddr": "10.0.0.2", 00:20:52.747 "trsvcid": "4420" 00:20:52.747 }, 00:20:52.747 "peer_address": { 00:20:52.747 "trtype": "TCP", 00:20:52.747 "adrfam": "IPv4", 00:20:52.747 "traddr": "10.0.0.1", 00:20:52.747 "trsvcid": "53312" 00:20:52.747 }, 00:20:52.747 "auth": { 00:20:52.747 "state": "completed", 00:20:52.747 
"digest": "sha256", 00:20:52.747 "dhgroup": "ffdhe8192" 00:20:52.747 } 00:20:52.747 } 00:20:52.747 ]' 00:20:52.747 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.006 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:53.006 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.006 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.006 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.006 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.006 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.006 03:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.264 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:20:53.264 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:53.832 03:02:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.832 03:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.090 00:20:54.090 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.090 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.090 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.350 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.350 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.350 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.350 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.350 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.350 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.350 { 00:20:54.350 "cntlid": 49, 00:20:54.350 "qid": 0, 00:20:54.350 "state": "enabled", 00:20:54.350 "thread": "nvmf_tgt_poll_group_000", 00:20:54.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:54.350 "listen_address": { 00:20:54.350 "trtype": "TCP", 00:20:54.350 "adrfam": "IPv4", 
00:20:54.350 "traddr": "10.0.0.2", 00:20:54.350 "trsvcid": "4420" 00:20:54.350 }, 00:20:54.350 "peer_address": { 00:20:54.350 "trtype": "TCP", 00:20:54.350 "adrfam": "IPv4", 00:20:54.350 "traddr": "10.0.0.1", 00:20:54.350 "trsvcid": "53350" 00:20:54.350 }, 00:20:54.350 "auth": { 00:20:54.350 "state": "completed", 00:20:54.350 "digest": "sha384", 00:20:54.350 "dhgroup": "null" 00:20:54.350 } 00:20:54.350 } 00:20:54.350 ]' 00:20:54.350 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.350 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.350 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.350 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:54.350 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.609 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.609 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.609 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.609 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:20:54.609 03:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:20:55.176 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.176 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:55.176 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.176 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.176 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.176 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.176 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:55.176 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:55.435 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:55.435 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.435 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.435 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.435 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:55.435 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.435 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.435 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.435 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.435 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.435 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.435 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.435 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.695 00:20:55.695 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.695 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.695 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.953 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.953 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.953 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.953 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.953 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.953 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.953 { 00:20:55.953 "cntlid": 51, 00:20:55.953 "qid": 0, 00:20:55.953 "state": "enabled", 
00:20:55.953 "thread": "nvmf_tgt_poll_group_000", 00:20:55.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:55.953 "listen_address": { 00:20:55.953 "trtype": "TCP", 00:20:55.953 "adrfam": "IPv4", 00:20:55.953 "traddr": "10.0.0.2", 00:20:55.953 "trsvcid": "4420" 00:20:55.953 }, 00:20:55.953 "peer_address": { 00:20:55.953 "trtype": "TCP", 00:20:55.953 "adrfam": "IPv4", 00:20:55.953 "traddr": "10.0.0.1", 00:20:55.953 "trsvcid": "42214" 00:20:55.953 }, 00:20:55.953 "auth": { 00:20:55.953 "state": "completed", 00:20:55.953 "digest": "sha384", 00:20:55.953 "dhgroup": "null" 00:20:55.953 } 00:20:55.953 } 00:20:55.953 ]' 00:20:55.953 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.953 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.953 03:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.953 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:55.953 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.953 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.953 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.953 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.212 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:20:56.212 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:20:56.779 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.779 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.779 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.779 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.779 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.779 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.779 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:56.779 03:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:57.037 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:57.037 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.037 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.037 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.037 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:57.037 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.037 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.037 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.037 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.037 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.037 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.037 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.037 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.296 00:20:57.296 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.296 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.296 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.554 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.554 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.554 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.554 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.554 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.554 03:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.554 { 00:20:57.554 "cntlid": 53, 00:20:57.554 "qid": 0, 00:20:57.554 "state": "enabled", 00:20:57.554 "thread": "nvmf_tgt_poll_group_000", 00:20:57.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:57.554 "listen_address": { 00:20:57.554 "trtype": "TCP", 00:20:57.554 "adrfam": "IPv4", 00:20:57.554 "traddr": "10.0.0.2", 00:20:57.554 "trsvcid": "4420" 00:20:57.555 }, 00:20:57.555 "peer_address": { 00:20:57.555 "trtype": "TCP", 00:20:57.555 "adrfam": "IPv4", 00:20:57.555 "traddr": "10.0.0.1", 00:20:57.555 "trsvcid": "42242" 00:20:57.555 }, 00:20:57.555 "auth": { 00:20:57.555 "state": "completed", 00:20:57.555 "digest": "sha384", 00:20:57.555 "dhgroup": "null" 00:20:57.555 } 00:20:57.555 } 00:20:57.555 ]' 00:20:57.555 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.555 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.555 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.555 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:57.555 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.555 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.555 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.555 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.813 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:20:57.813 03:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:20:58.380 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.380 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:58.380 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.380 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.380 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.380 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:58.380 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:58.380 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:58.639 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:58.639 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.639 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.639 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:58.639 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.639 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.639 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:58.639 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.639 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.639 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.639 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.639 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.639 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.897 00:20:58.897 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.897 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.897 03:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.155 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.155 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.155 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.155 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.155 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.155 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.155 { 00:20:59.155 "cntlid": 55, 00:20:59.155 "qid": 0, 00:20:59.155 "state": "enabled", 00:20:59.155 "thread": "nvmf_tgt_poll_group_000", 00:20:59.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:59.155 "listen_address": { 00:20:59.155 "trtype": "TCP", 00:20:59.155 "adrfam": "IPv4", 00:20:59.155 "traddr": "10.0.0.2", 00:20:59.156 "trsvcid": "4420" 00:20:59.156 }, 00:20:59.156 "peer_address": { 00:20:59.156 "trtype": "TCP", 00:20:59.156 "adrfam": "IPv4", 00:20:59.156 "traddr": "10.0.0.1", 00:20:59.156 "trsvcid": "42280" 00:20:59.156 }, 00:20:59.156 "auth": { 00:20:59.156 "state": "completed", 00:20:59.156 "digest": "sha384", 00:20:59.156 "dhgroup": "null" 00:20:59.156 } 00:20:59.156 } 00:20:59.156 ]' 00:20:59.156 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.156 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.156 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.156 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:59.156 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.156 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.156 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.156 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.414 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:20:59.414 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:20:59.980 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.980 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.980 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.980 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.980 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.980 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.980 03:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.980 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:59.980 03:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:00.237 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:00.237 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.237 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.237 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:00.237 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:00.237 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.237 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.237 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.237 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.237 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.237 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.237 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.237 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.495 00:21:00.495 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.495 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.495 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.753 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.753 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.753 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:00.753 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.753 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.753 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.753 { 00:21:00.753 "cntlid": 57, 00:21:00.753 "qid": 0, 00:21:00.753 "state": "enabled", 00:21:00.753 "thread": "nvmf_tgt_poll_group_000", 00:21:00.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:00.753 "listen_address": { 00:21:00.753 "trtype": "TCP", 00:21:00.753 "adrfam": "IPv4", 00:21:00.753 "traddr": "10.0.0.2", 00:21:00.753 "trsvcid": "4420" 00:21:00.753 }, 00:21:00.753 "peer_address": { 00:21:00.753 "trtype": "TCP", 00:21:00.753 "adrfam": "IPv4", 00:21:00.753 "traddr": "10.0.0.1", 00:21:00.753 "trsvcid": "42304" 00:21:00.753 }, 00:21:00.753 "auth": { 00:21:00.753 "state": "completed", 00:21:00.753 "digest": "sha384", 00:21:00.753 "dhgroup": "ffdhe2048" 00:21:00.753 } 00:21:00.753 } 00:21:00.753 ]' 00:21:00.753 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.753 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.754 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.754 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:00.754 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.754 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.754 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.754 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.012 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:01.012 03:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.578 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.836 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.836 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.836 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.836 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.836 00:21:02.095 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.095 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.095 03:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.095 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.095 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.095 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.095 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.095 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.095 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.095 { 00:21:02.095 "cntlid": 59, 00:21:02.095 "qid": 0, 00:21:02.095 "state": "enabled", 00:21:02.095 "thread": "nvmf_tgt_poll_group_000", 00:21:02.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:02.095 "listen_address": { 00:21:02.095 "trtype": "TCP", 00:21:02.095 "adrfam": "IPv4", 00:21:02.095 "traddr": "10.0.0.2", 00:21:02.095 "trsvcid": "4420" 00:21:02.095 }, 00:21:02.095 "peer_address": { 00:21:02.095 "trtype": "TCP", 00:21:02.095 "adrfam": "IPv4", 00:21:02.095 "traddr": "10.0.0.1", 00:21:02.095 "trsvcid": "42326" 00:21:02.095 }, 00:21:02.095 "auth": { 00:21:02.095 "state": "completed", 00:21:02.095 "digest": "sha384", 00:21:02.095 "dhgroup": "ffdhe2048" 00:21:02.095 } 00:21:02.095 } 00:21:02.095 ]' 00:21:02.095 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.095 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.095 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.353 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.353 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.353 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.353 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.353 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.612 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:02.612 03:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:03.179 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.179 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:03.179 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.179 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.179 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.179 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.179 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:03.179 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:03.179 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:03.179 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.179 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.179 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:03.180 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:03.180 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.180 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.180 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.180 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.180 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.180 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.180 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.180 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.438 00:21:03.438 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.438 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.438 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.697 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.697 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.697 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.697 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.697 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.697 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.697 { 00:21:03.697 "cntlid": 61, 00:21:03.697 "qid": 0, 00:21:03.697 "state": "enabled", 00:21:03.697 "thread": "nvmf_tgt_poll_group_000", 00:21:03.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:03.697 "listen_address": { 00:21:03.697 "trtype": "TCP", 00:21:03.697 "adrfam": "IPv4", 00:21:03.697 "traddr": "10.0.0.2", 00:21:03.697 "trsvcid": "4420" 00:21:03.697 }, 00:21:03.697 "peer_address": { 00:21:03.697 "trtype": "TCP", 00:21:03.697 "adrfam": "IPv4", 00:21:03.697 "traddr": "10.0.0.1", 00:21:03.697 "trsvcid": "42350" 00:21:03.697 }, 00:21:03.697 "auth": { 00:21:03.697 "state": "completed", 00:21:03.697 "digest": "sha384", 00:21:03.697 "dhgroup": "ffdhe2048" 00:21:03.697 } 00:21:03.697 } 00:21:03.697 ]' 00:21:03.697 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.697 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.697 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.956 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.956 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.956 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.956 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.956 03:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.956 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:03.956 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:04.524 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.524 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:04.524 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.524 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.524 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.524 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.524 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.524 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.783 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:04.783 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.783 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.783 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.783 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.783 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.783 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:04.783 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.783 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.783 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.783 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.783 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.783 03:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.042 00:21:05.042 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.042 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.042 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.301 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.301 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.301 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.301 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.301 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.301 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.301 { 00:21:05.301 "cntlid": 63, 00:21:05.301 "qid": 0, 00:21:05.301 "state": "enabled", 00:21:05.301 "thread": "nvmf_tgt_poll_group_000", 00:21:05.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:05.301 "listen_address": { 00:21:05.301 "trtype": "TCP", 00:21:05.301 "adrfam": "IPv4", 00:21:05.301 "traddr": "10.0.0.2", 00:21:05.301 "trsvcid": "4420" 00:21:05.301 }, 00:21:05.301 "peer_address": { 00:21:05.301 "trtype": "TCP", 00:21:05.301 "adrfam": "IPv4", 00:21:05.301 "traddr": "10.0.0.1", 00:21:05.301 "trsvcid": "42376" 00:21:05.301 }, 00:21:05.301 "auth": { 00:21:05.301 "state": "completed", 00:21:05.301 "digest": "sha384", 00:21:05.301 "dhgroup": "ffdhe2048" 00:21:05.301 } 00:21:05.301 } 00:21:05.301 ]' 00:21:05.301 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.301 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.301 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.301 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:05.301 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.301 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.301 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.301 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.560 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:05.560 03:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:06.126 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:06.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.126 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:06.126 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.126 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.126 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.126 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.126 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.126 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:06.126 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:06.384 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:06.384 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.384 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.384 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:06.384 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:06.384 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.384 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.384 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.384 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.384 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.384 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.384 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.384 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.642 
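For readability, the iteration the xtrace above has just completed (switch the host to the sha384/ffdhe3072 pair, register the host's DH-HMAC-CHAP keys on the subsystem, then attach a controller with the same keys) can be condensed into a minimal sketch. It reuses only commands and flags visible in this trace; the socket paths and keyN/ckeyN names mirror the traced target/auth.sh helpers and are assumptions outside this run.

    # Minimal sketch of one connect_authenticate iteration, assuming the SPDK
    # target listens on the default RPC socket and the host app on
    # /var/tmp/host.sock, with keyN/ckeyN already registered as in the script.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    NQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    digest=sha384 dhgroup=ffdhe3072 keyid=0

    # Host side: restrict negotiation to the digest/dhgroup under test.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Target side (default socket, like rpc_cmd in the trace): allow the host
    # with matching host and controller keys.
    "$RPC" nvmf_subsystem_add_host "$NQN" "$HOSTNQN" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Host side: attach a controller, authenticating with the same key pair.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$NQN" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

The trace then verifies the attach before moving on, as the next log lines show.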
00:21:06.642 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.642 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.642 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.901 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.901 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.901 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.901 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.901 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.901 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.901 { 00:21:06.901 "cntlid": 65, 00:21:06.901 "qid": 0, 00:21:06.901 "state": "enabled", 00:21:06.901 "thread": "nvmf_tgt_poll_group_000", 00:21:06.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:06.901 "listen_address": { 00:21:06.901 "trtype": "TCP", 00:21:06.901 "adrfam": "IPv4", 00:21:06.901 "traddr": "10.0.0.2", 00:21:06.901 "trsvcid": "4420" 00:21:06.901 }, 00:21:06.901 "peer_address": { 00:21:06.901 "trtype": "TCP", 00:21:06.901 "adrfam": "IPv4", 00:21:06.901 "traddr": "10.0.0.1", 00:21:06.901 "trsvcid": "49348" 00:21:06.901 }, 00:21:06.901 "auth": { 00:21:06.901 "state": "completed", 00:21:06.901 "digest": "sha384", 00:21:06.901 "dhgroup": "ffdhe3072" 00:21:06.901 } 00:21:06.901 } 00:21:06.901 ]' 00:21:06.901 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.901 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.901 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.901 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:06.901 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.901 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.901 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.901 03:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.159 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:07.159 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:07.727 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.727 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:07.727 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.727 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.727 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.727 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.727 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:07.727 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:07.987 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:07.987 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.987 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.987 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:07.987 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:07.987 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.987 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.987 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.987 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.987 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.987 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.987 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.987 03:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.246 00:21:08.246 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.246 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.246 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.505 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.505 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.505 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.505 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.505 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.505 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.505 { 00:21:08.505 "cntlid": 67, 00:21:08.505 "qid": 0, 00:21:08.505 "state": "enabled", 00:21:08.505 "thread": "nvmf_tgt_poll_group_000", 00:21:08.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:08.505 "listen_address": { 00:21:08.505 "trtype": "TCP", 00:21:08.505 "adrfam": "IPv4", 00:21:08.505 "traddr": "10.0.0.2", 00:21:08.505 "trsvcid": "4420" 00:21:08.505 }, 00:21:08.505 "peer_address": { 00:21:08.505 "trtype": "TCP", 00:21:08.505 "adrfam": "IPv4", 00:21:08.505 "traddr": "10.0.0.1", 00:21:08.505 "trsvcid": "49378" 00:21:08.505 }, 00:21:08.505 "auth": { 00:21:08.505 "state": "completed", 00:21:08.505 "digest": "sha384", 00:21:08.505 "dhgroup": "ffdhe3072" 00:21:08.505 } 00:21:08.505 } 00:21:08.505 ]' 00:21:08.505 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.505 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.505 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.505 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:08.505 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.505 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.505 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.505 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.764 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret 
DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:08.764 03:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:09.331 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.331 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.331 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.331 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.331 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.331 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.331 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:09.331 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:09.591 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:09.591 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.591 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.591 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:09.591 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.591 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.591 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.591 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.591 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.591 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.591 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.591 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.591 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.850 00:21:09.850 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.850 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.850 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.111 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.111 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.111 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.111 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.111 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.111 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.111 { 00:21:10.111 "cntlid": 69, 00:21:10.111 "qid": 0, 00:21:10.111 "state": "enabled", 00:21:10.111 "thread": "nvmf_tgt_poll_group_000", 00:21:10.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:10.111 "listen_address": { 00:21:10.111 "trtype": "TCP", 00:21:10.111 "adrfam": "IPv4", 00:21:10.111 "traddr": "10.0.0.2", 00:21:10.111 "trsvcid": "4420" 00:21:10.111 }, 00:21:10.111 "peer_address": { 00:21:10.111 "trtype": "TCP", 00:21:10.111 "adrfam": "IPv4", 00:21:10.111 "traddr": "10.0.0.1", 00:21:10.111 "trsvcid": "49402" 00:21:10.111 }, 00:21:10.111 "auth": { 00:21:10.111 "state": "completed", 00:21:10.111 "digest": "sha384", 00:21:10.111 "dhgroup": "ffdhe3072" 00:21:10.111 } 00:21:10.111 } 00:21:10.111 ]' 00:21:10.111 03:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.111 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.111 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.111 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.111 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.111 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.111 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.111 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:10.371 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:10.371 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:10.939 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.939 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:10.939 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.939 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.939 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.939 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.939 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:10.939 03:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:11.197 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:11.198 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.198 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.198 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.198 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:11.198 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.198 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:11.198 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.198 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.198 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.198 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
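The verification step the trace runs after every attach (and again right after this point) reduces to one controller-name check on the host plus three jq checks against the target's qpair list. A compact sketch, assuming jq is installed and using only the filters visible in the traced target/auth.sh:

    # Sketch of the per-iteration verification; the jq filters are the ones the
    # trace uses, nothing is assumed beyond the variables set below.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    NQN=nqn.2024-03.io.spdk:cnode0
    digest=sha384 dhgroup=ffdhe3072

    # The host application must report exactly the controller that was attached.
    [[ $("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # The target's qpair must have completed authentication with the expected
    # digest and DH group.
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$NQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Detach before the next key/dhgroup combination.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0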
00:21:11.198 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.198 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.198 00:21:11.457 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.457 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.457 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.457 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.457 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.457 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.457 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.457 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.457 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.457 { 00:21:11.457 "cntlid": 71, 00:21:11.457 "qid": 0, 00:21:11.457 "state": "enabled", 00:21:11.457 "thread": "nvmf_tgt_poll_group_000", 00:21:11.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:11.457 "listen_address": { 00:21:11.457 "trtype": "TCP", 00:21:11.457 "adrfam": "IPv4", 00:21:11.457 "traddr": "10.0.0.2", 00:21:11.457 "trsvcid": "4420" 00:21:11.457 }, 00:21:11.457 "peer_address": { 00:21:11.457 "trtype": "TCP", 00:21:11.457 "adrfam": "IPv4", 00:21:11.457 "traddr": "10.0.0.1", 00:21:11.457 "trsvcid": "49422" 00:21:11.457 }, 00:21:11.457 "auth": { 00:21:11.457 "state": "completed", 00:21:11.457 "digest": "sha384", 00:21:11.457 "dhgroup": "ffdhe3072" 00:21:11.457 } 00:21:11.457 } 00:21:11.457 ]' 00:21:11.457 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.716 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.716 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.716 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.716 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.716 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.716 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.716 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.979 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:11.979 03:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
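Each iteration also exercises the kernel initiator through nvme-cli before cleaning up the subsystem; a sketch of that leg follows, using the same flags as the traced commands. The DHHC-1 secrets shown are placeholders, not the values generated by this run.

    # Sketch of the nvme-cli leg plus cleanup; flags match the traced commands,
    # the secrets are placeholders to be replaced with real DHHC-1 strings.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
    KEY='DHHC-1:00:placeholder-host-key:'    # placeholder, not from this run
    CKEY='DHHC-1:03:placeholder-ctrl-key:'   # placeholder, not from this run

    # Kernel initiator connects with the host secret and the controller secret.
    nvme connect -t tcp -a 10.0.0.2 -n "$NQN" -i 1 -q "$HOSTNQN" \
        --hostid "$HOSTID" -l 0 \
        --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"

    nvme disconnect -n "$NQN"

    # Target side: drop the host entry so the next dhgroup/key pair starts clean.
    "$RPC" nvmf_subsystem_remove_host "$NQN" "$HOSTNQN"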
00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.547 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.806 00:21:12.806 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.806 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.806 03:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.064 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.064 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.064 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.064 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.064 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.064 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.064 { 00:21:13.064 "cntlid": 73, 00:21:13.064 "qid": 0, 00:21:13.064 "state": "enabled", 00:21:13.064 "thread": "nvmf_tgt_poll_group_000", 00:21:13.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:13.064 "listen_address": { 00:21:13.064 "trtype": "TCP", 00:21:13.064 "adrfam": "IPv4", 00:21:13.064 "traddr": "10.0.0.2", 00:21:13.064 "trsvcid": "4420" 00:21:13.064 }, 00:21:13.064 "peer_address": { 00:21:13.064 "trtype": "TCP", 00:21:13.064 "adrfam": "IPv4", 00:21:13.065 "traddr": "10.0.0.1", 00:21:13.065 "trsvcid": "49464" 00:21:13.065 }, 00:21:13.065 "auth": { 00:21:13.065 "state": "completed", 00:21:13.065 "digest": "sha384", 00:21:13.065 "dhgroup": "ffdhe4096" 00:21:13.065 } 00:21:13.065 } 00:21:13.065 ]' 00:21:13.065 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.065 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.065 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.323 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:13.323 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.323 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.323 
03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.323 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.582 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:13.582 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:14.149 03:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.149 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.407 00:21:14.407 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.407 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.407 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.666 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.666 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.666 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.666 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.666 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.666 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.666 { 00:21:14.666 "cntlid": 75, 00:21:14.666 "qid": 0, 00:21:14.666 "state": "enabled", 00:21:14.666 "thread": "nvmf_tgt_poll_group_000", 00:21:14.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:14.666 "listen_address": { 00:21:14.666 "trtype": "TCP", 00:21:14.666 "adrfam": "IPv4", 00:21:14.666 "traddr": "10.0.0.2", 00:21:14.666 "trsvcid": "4420" 00:21:14.666 }, 00:21:14.666 "peer_address": { 00:21:14.666 "trtype": "TCP", 00:21:14.666 "adrfam": "IPv4", 00:21:14.666 "traddr": "10.0.0.1", 00:21:14.666 "trsvcid": "49476" 00:21:14.666 }, 00:21:14.666 "auth": { 00:21:14.666 "state": "completed", 00:21:14.666 "digest": "sha384", 00:21:14.666 "dhgroup": "ffdhe4096" 00:21:14.666 } 00:21:14.666 } 00:21:14.666 ]' 00:21:14.666 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.666 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.666 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.924 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:14.924 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.924 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.924 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.924 03:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.924 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:14.924 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:15.491 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.492 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:15.492 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.492 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.492 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.492 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.492 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:15.492 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:15.751 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:15.751 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.751 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.751 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:15.751 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:15.751 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.751 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.751 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.751 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.751 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.751 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.751 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.751 03:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.009 00:21:16.009 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.009 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.009 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.268 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.268 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.268 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.268 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.268 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.268 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.268 { 00:21:16.268 "cntlid": 77, 00:21:16.268 "qid": 0, 00:21:16.268 "state": "enabled", 00:21:16.268 "thread": "nvmf_tgt_poll_group_000", 00:21:16.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:16.268 "listen_address": { 00:21:16.268 "trtype": "TCP", 00:21:16.268 "adrfam": "IPv4", 00:21:16.268 "traddr": "10.0.0.2", 00:21:16.268 "trsvcid": "4420" 00:21:16.268 }, 00:21:16.268 "peer_address": { 00:21:16.268 "trtype": "TCP", 00:21:16.268 "adrfam": "IPv4", 00:21:16.268 "traddr": "10.0.0.1", 00:21:16.268 "trsvcid": "33104" 00:21:16.268 }, 00:21:16.268 "auth": { 00:21:16.268 "state": "completed", 00:21:16.268 "digest": "sha384", 00:21:16.268 "dhgroup": "ffdhe4096" 00:21:16.268 } 00:21:16.268 } 00:21:16.268 ]' 00:21:16.269 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.269 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.269 03:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.269 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.269 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.527 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.527 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.527 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.527 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:16.527 03:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:17.095 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.095 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.095 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.095 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.095 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.095 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.095 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.095 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.354 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:17.354 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.354 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.354 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:17.354 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:17.354 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.354 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:17.354 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.354 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.354 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.354 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.354 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.354 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.613 00:21:17.613 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.613 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.613 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.872 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.872 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.872 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.872 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.872 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.872 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.872 { 00:21:17.872 "cntlid": 79, 00:21:17.872 "qid": 0, 00:21:17.872 "state": "enabled", 00:21:17.872 "thread": "nvmf_tgt_poll_group_000", 00:21:17.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:17.872 "listen_address": { 00:21:17.872 "trtype": "TCP", 00:21:17.872 "adrfam": "IPv4", 00:21:17.872 "traddr": "10.0.0.2", 00:21:17.872 "trsvcid": "4420" 00:21:17.872 }, 00:21:17.872 "peer_address": { 00:21:17.872 "trtype": "TCP", 00:21:17.872 "adrfam": "IPv4", 00:21:17.872 "traddr": "10.0.0.1", 00:21:17.872 "trsvcid": "33126" 00:21:17.872 }, 00:21:17.872 "auth": { 00:21:17.872 "state": "completed", 00:21:17.872 "digest": "sha384", 00:21:17.872 "dhgroup": "ffdhe4096" 00:21:17.872 } 00:21:17.872 } 00:21:17.872 ]' 00:21:17.872 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.872 03:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.872 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.872 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.872 03:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.131 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.131 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.131 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.131 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:18.131 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:18.699 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.699 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:18.699 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.699 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.699 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.699 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.699 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.699 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.699 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.958 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:18.958 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.958 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:18.958 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:18.958 03:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:18.958 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.958 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.958 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.958 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.958 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.958 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.958 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.958 03:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.217 00:21:19.217 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.217 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.217 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.476 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.476 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.476 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.476 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.476 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.476 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.476 { 00:21:19.476 "cntlid": 81, 00:21:19.476 "qid": 0, 00:21:19.476 "state": "enabled", 00:21:19.476 "thread": "nvmf_tgt_poll_group_000", 00:21:19.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:19.476 "listen_address": { 00:21:19.476 "trtype": "TCP", 00:21:19.476 "adrfam": "IPv4", 00:21:19.476 "traddr": "10.0.0.2", 00:21:19.476 "trsvcid": "4420" 00:21:19.476 }, 00:21:19.476 "peer_address": { 00:21:19.476 "trtype": "TCP", 00:21:19.476 "adrfam": "IPv4", 00:21:19.476 "traddr": "10.0.0.1", 00:21:19.476 "trsvcid": "33136" 00:21:19.476 }, 00:21:19.476 "auth": { 00:21:19.476 "state": "completed", 00:21:19.476 "digest": 
"sha384", 00:21:19.476 "dhgroup": "ffdhe6144" 00:21:19.476 } 00:21:19.476 } 00:21:19.476 ]' 00:21:19.476 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.476 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.476 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.734 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.734 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.734 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.734 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.734 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.993 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:19.993 03:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:20.251 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.509 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.077 00:21:21.077 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.077 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.077 03:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.077 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.077 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.077 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.077 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.077 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.077 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.077 { 00:21:21.077 "cntlid": 83, 00:21:21.077 "qid": 0, 00:21:21.077 "state": "enabled", 00:21:21.077 "thread": "nvmf_tgt_poll_group_000", 00:21:21.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:21.077 "listen_address": { 00:21:21.077 "trtype": "TCP", 00:21:21.077 "adrfam": "IPv4", 00:21:21.077 "traddr": "10.0.0.2", 00:21:21.077 
"trsvcid": "4420" 00:21:21.077 }, 00:21:21.077 "peer_address": { 00:21:21.077 "trtype": "TCP", 00:21:21.077 "adrfam": "IPv4", 00:21:21.077 "traddr": "10.0.0.1", 00:21:21.077 "trsvcid": "33156" 00:21:21.077 }, 00:21:21.077 "auth": { 00:21:21.077 "state": "completed", 00:21:21.077 "digest": "sha384", 00:21:21.077 "dhgroup": "ffdhe6144" 00:21:21.077 } 00:21:21.077 } 00:21:21.077 ]' 00:21:21.077 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.336 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.336 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.336 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:21.336 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.336 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.336 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.336 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.594 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:21.594 03:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:22.161 
03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.161 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.727 00:21:22.727 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.727 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.727 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.727 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.727 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.727 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.727 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.727 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.727 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.727 { 00:21:22.727 "cntlid": 85, 00:21:22.727 "qid": 0, 00:21:22.727 "state": "enabled", 00:21:22.727 "thread": "nvmf_tgt_poll_group_000", 00:21:22.727 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:22.727 "listen_address": { 00:21:22.727 "trtype": "TCP", 00:21:22.727 "adrfam": "IPv4", 00:21:22.727 "traddr": "10.0.0.2", 00:21:22.727 "trsvcid": "4420" 00:21:22.727 }, 00:21:22.727 "peer_address": { 00:21:22.727 "trtype": "TCP", 00:21:22.727 "adrfam": "IPv4", 00:21:22.727 "traddr": "10.0.0.1", 00:21:22.727 "trsvcid": "33170" 00:21:22.727 }, 00:21:22.727 "auth": { 00:21:22.727 "state": "completed", 00:21:22.727 "digest": "sha384", 00:21:22.727 "dhgroup": "ffdhe6144" 00:21:22.727 } 00:21:22.727 } 00:21:22.727 ]' 00:21:22.727 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.986 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.986 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.986 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.986 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.986 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.986 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.986 03:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.245 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:23.245 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.812 03:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.812 03:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.379 00:21:24.379 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.379 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.379 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.379 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.379 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.379 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.379 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.379 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.379 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.379 { 00:21:24.379 "cntlid": 87, 
00:21:24.379 "qid": 0, 00:21:24.379 "state": "enabled", 00:21:24.379 "thread": "nvmf_tgt_poll_group_000", 00:21:24.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:24.379 "listen_address": { 00:21:24.379 "trtype": "TCP", 00:21:24.379 "adrfam": "IPv4", 00:21:24.379 "traddr": "10.0.0.2", 00:21:24.379 "trsvcid": "4420" 00:21:24.379 }, 00:21:24.379 "peer_address": { 00:21:24.379 "trtype": "TCP", 00:21:24.379 "adrfam": "IPv4", 00:21:24.379 "traddr": "10.0.0.1", 00:21:24.379 "trsvcid": "33198" 00:21:24.379 }, 00:21:24.379 "auth": { 00:21:24.379 "state": "completed", 00:21:24.379 "digest": "sha384", 00:21:24.379 "dhgroup": "ffdhe6144" 00:21:24.379 } 00:21:24.379 } 00:21:24.379 ]' 00:21:24.379 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.638 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.638 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.638 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.638 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.638 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.638 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.638 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.897 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:24.897 03:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.464 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.722 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.722 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.722 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.722 03:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.980 00:21:25.980 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.980 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.980 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.239 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.239 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.239 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.239 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.239 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.239 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.239 { 00:21:26.239 "cntlid": 89, 00:21:26.239 "qid": 0, 00:21:26.239 "state": "enabled", 00:21:26.239 "thread": "nvmf_tgt_poll_group_000", 00:21:26.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:26.239 "listen_address": { 00:21:26.239 "trtype": "TCP", 00:21:26.239 "adrfam": "IPv4", 00:21:26.239 "traddr": "10.0.0.2", 00:21:26.239 "trsvcid": "4420" 00:21:26.239 }, 00:21:26.239 "peer_address": { 00:21:26.239 "trtype": "TCP", 00:21:26.239 "adrfam": "IPv4", 00:21:26.239 "traddr": "10.0.0.1", 00:21:26.239 "trsvcid": "33638" 00:21:26.239 }, 00:21:26.239 "auth": { 00:21:26.239 "state": "completed", 00:21:26.239 "digest": "sha384", 00:21:26.239 "dhgroup": "ffdhe8192" 00:21:26.239 } 00:21:26.239 } 00:21:26.239 ]' 00:21:26.239 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.239 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.239 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.498 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.498 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.498 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.498 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.498 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.756 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:26.757 03:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.324 03:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.324 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.892 00:21:27.892 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.892 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.892 03:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.151 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.151 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:28.151 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.151 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.151 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.151 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.151 { 00:21:28.151 "cntlid": 91, 00:21:28.151 "qid": 0, 00:21:28.151 "state": "enabled", 00:21:28.151 "thread": "nvmf_tgt_poll_group_000", 00:21:28.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:28.151 "listen_address": { 00:21:28.151 "trtype": "TCP", 00:21:28.151 "adrfam": "IPv4", 00:21:28.151 "traddr": "10.0.0.2", 00:21:28.151 "trsvcid": "4420" 00:21:28.151 }, 00:21:28.151 "peer_address": { 00:21:28.151 "trtype": "TCP", 00:21:28.151 "adrfam": "IPv4", 00:21:28.151 "traddr": "10.0.0.1", 00:21:28.151 "trsvcid": "33666" 00:21:28.151 }, 00:21:28.151 "auth": { 00:21:28.151 "state": "completed", 00:21:28.151 "digest": "sha384", 00:21:28.151 "dhgroup": "ffdhe8192" 00:21:28.151 } 00:21:28.151 } 00:21:28.151 ]' 00:21:28.151 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.151 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.151 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.151 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:28.151 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.151 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.151 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.151 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.410 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:28.410 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:28.977 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.977 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:28.977 03:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.977 03:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.977 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.977 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.977 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.977 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:29.236 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:29.236 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.236 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:29.236 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:29.236 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:29.236 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.236 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.236 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.236 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.236 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.236 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.236 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.236 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.804 00:21:29.804 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.804 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.804 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.804 03:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.804 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.804 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.804 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.804 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.804 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.804 { 00:21:29.804 "cntlid": 93, 00:21:29.804 "qid": 0, 00:21:29.804 "state": "enabled", 00:21:29.804 "thread": "nvmf_tgt_poll_group_000", 00:21:29.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:29.804 "listen_address": { 00:21:29.804 "trtype": "TCP", 00:21:29.804 "adrfam": "IPv4", 00:21:29.804 "traddr": "10.0.0.2", 00:21:29.804 "trsvcid": "4420" 00:21:29.804 }, 00:21:29.804 "peer_address": { 00:21:29.804 "trtype": "TCP", 00:21:29.804 "adrfam": "IPv4", 00:21:29.804 "traddr": "10.0.0.1", 00:21:29.804 "trsvcid": "33704" 00:21:29.804 }, 00:21:29.804 "auth": { 00:21:29.804 "state": "completed", 00:21:29.804 "digest": "sha384", 00:21:29.804 "dhgroup": "ffdhe8192" 00:21:29.804 } 00:21:29.804 } 00:21:29.804 ]' 00:21:29.804 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.063 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.063 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.063 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.063 03:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.063 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.063 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.063 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.321 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:30.321 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:30.887 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.887 03:02:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:30.887 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.887 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.887 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.887 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.888 03:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.455 00:21:31.455 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.455 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.455 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.714 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.714 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.714 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.714 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.714 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.714 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.714 { 00:21:31.714 "cntlid": 95, 00:21:31.714 "qid": 0, 00:21:31.714 "state": "enabled", 00:21:31.714 "thread": "nvmf_tgt_poll_group_000", 00:21:31.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:31.714 "listen_address": { 00:21:31.714 "trtype": "TCP", 00:21:31.714 "adrfam": "IPv4", 00:21:31.714 "traddr": "10.0.0.2", 00:21:31.714 "trsvcid": "4420" 00:21:31.714 }, 00:21:31.714 "peer_address": { 00:21:31.714 "trtype": "TCP", 00:21:31.714 "adrfam": "IPv4", 00:21:31.714 "traddr": "10.0.0.1", 00:21:31.714 "trsvcid": "33734" 00:21:31.714 }, 00:21:31.714 "auth": { 00:21:31.714 "state": "completed", 00:21:31.714 "digest": "sha384", 00:21:31.714 "dhgroup": "ffdhe8192" 00:21:31.714 } 00:21:31.714 } 00:21:31.714 ]' 00:21:31.714 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.714 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.714 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.714 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.714 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.714 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.714 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.714 03:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.973 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:31.973 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:32.540 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.540 03:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:32.540 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.540 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.540 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.540 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:32.540 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.540 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.540 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:32.540 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:32.799 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:32.799 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.799 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.799 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:32.799 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:32.799 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.799 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.799 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.799 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.799 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.799 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.799 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.799 03:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.057 00:21:33.058 
03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.058 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.058 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.317 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.317 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.317 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.317 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.317 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.317 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.317 { 00:21:33.317 "cntlid": 97, 00:21:33.317 "qid": 0, 00:21:33.317 "state": "enabled", 00:21:33.317 "thread": "nvmf_tgt_poll_group_000", 00:21:33.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:33.317 "listen_address": { 00:21:33.317 "trtype": "TCP", 00:21:33.317 "adrfam": "IPv4", 00:21:33.317 "traddr": "10.0.0.2", 00:21:33.317 "trsvcid": "4420" 00:21:33.317 }, 00:21:33.317 "peer_address": { 00:21:33.317 "trtype": "TCP", 00:21:33.317 "adrfam": "IPv4", 00:21:33.317 "traddr": "10.0.0.1", 00:21:33.317 "trsvcid": "33764" 00:21:33.317 }, 00:21:33.317 "auth": { 00:21:33.317 "state": "completed", 00:21:33.317 "digest": "sha512", 00:21:33.317 "dhgroup": "null" 00:21:33.317 } 00:21:33.317 } 00:21:33.317 ]' 00:21:33.317 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.317 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.317 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.317 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:33.317 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.317 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.317 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.317 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.576 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:33.576 03:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:34.143 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.143 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.143 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.143 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.143 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.143 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.143 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:34.143 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:34.402 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:34.402 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.402 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.402 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:34.402 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:34.402 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.402 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.402 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.402 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.402 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.402 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.402 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.402 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.661 00:21:34.661 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.661 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.661 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.920 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.920 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.920 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.920 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.920 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.920 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.920 { 00:21:34.920 "cntlid": 99, 00:21:34.920 "qid": 0, 00:21:34.920 "state": "enabled", 00:21:34.920 "thread": "nvmf_tgt_poll_group_000", 00:21:34.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:34.920 "listen_address": { 00:21:34.920 "trtype": "TCP", 00:21:34.920 "adrfam": "IPv4", 00:21:34.920 "traddr": "10.0.0.2", 00:21:34.920 "trsvcid": "4420" 00:21:34.920 }, 00:21:34.920 "peer_address": { 00:21:34.920 "trtype": "TCP", 00:21:34.920 "adrfam": "IPv4", 00:21:34.920 "traddr": "10.0.0.1", 00:21:34.920 "trsvcid": "33800" 00:21:34.920 }, 00:21:34.920 "auth": { 00:21:34.920 "state": "completed", 00:21:34.920 "digest": "sha512", 00:21:34.920 "dhgroup": "null" 00:21:34.920 } 00:21:34.920 } 00:21:34.920 ]' 00:21:34.920 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.920 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.920 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.920 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:34.920 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.920 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.920 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.920 03:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.179 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:35.179 03:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:35.748 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.748 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:35.748 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.748 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.748 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.748 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.748 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:35.748 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:36.007 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:36.007 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.007 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.007 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:36.007 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:36.007 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.008 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.008 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.008 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.008 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.008 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.008 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:36.008 03:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.267 00:21:36.267 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.267 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.267 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.267 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.267 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.267 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.267 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.526 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.526 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.526 { 00:21:36.526 "cntlid": 101, 00:21:36.526 "qid": 0, 00:21:36.526 "state": "enabled", 00:21:36.526 "thread": "nvmf_tgt_poll_group_000", 00:21:36.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:36.526 "listen_address": { 00:21:36.526 "trtype": "TCP", 00:21:36.526 "adrfam": "IPv4", 00:21:36.526 "traddr": "10.0.0.2", 00:21:36.526 "trsvcid": "4420" 00:21:36.526 }, 00:21:36.526 "peer_address": { 00:21:36.526 "trtype": "TCP", 00:21:36.526 "adrfam": "IPv4", 00:21:36.526 "traddr": "10.0.0.1", 00:21:36.526 "trsvcid": "47856" 00:21:36.526 }, 00:21:36.526 "auth": { 00:21:36.526 "state": "completed", 00:21:36.526 "digest": "sha512", 00:21:36.526 "dhgroup": "null" 00:21:36.526 } 00:21:36.526 } 00:21:36.526 ]' 00:21:36.526 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.526 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.526 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.526 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:36.526 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.526 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.526 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.526 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.785 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:36.785 03:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:37.352 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.352 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:37.352 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.352 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.352 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.352 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.352 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:37.352 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:37.611 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:37.611 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.611 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.611 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:37.611 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:37.611 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.611 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:37.611 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.611 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.611 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.611 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:37.611 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.611 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.871 00:21:37.871 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.871 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.871 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.871 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.871 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.871 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.871 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.871 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.871 { 00:21:37.871 "cntlid": 103, 00:21:37.871 "qid": 0, 00:21:37.871 "state": "enabled", 00:21:37.871 "thread": "nvmf_tgt_poll_group_000", 00:21:37.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:37.871 "listen_address": { 00:21:37.871 "trtype": "TCP", 00:21:37.871 "adrfam": "IPv4", 00:21:37.871 "traddr": "10.0.0.2", 00:21:37.871 "trsvcid": "4420" 00:21:37.871 }, 00:21:37.871 "peer_address": { 00:21:37.871 "trtype": "TCP", 00:21:37.871 "adrfam": "IPv4", 00:21:37.871 "traddr": "10.0.0.1", 00:21:37.871 "trsvcid": "47892" 00:21:37.871 }, 00:21:37.871 "auth": { 00:21:37.871 "state": "completed", 00:21:37.871 "digest": "sha512", 00:21:37.871 "dhgroup": "null" 00:21:37.871 } 00:21:37.871 } 00:21:37.871 ]' 00:21:37.871 03:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.130 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.130 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.130 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:38.130 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.130 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.130 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.130 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.388 03:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:38.388 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:38.956 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.956 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:38.956 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.956 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.956 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.956 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.956 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.956 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:38.956 03:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:39.214 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:39.214 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.214 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.214 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:39.214 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:39.214 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.214 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.214 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.214 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.214 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.214 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:21:39.214 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.214 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.214 00:21:39.473 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.473 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.473 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.473 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.473 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.473 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.473 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.473 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.473 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.473 { 00:21:39.473 "cntlid": 105, 00:21:39.473 "qid": 0, 00:21:39.473 "state": "enabled", 00:21:39.473 "thread": "nvmf_tgt_poll_group_000", 00:21:39.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:39.473 "listen_address": { 00:21:39.473 "trtype": "TCP", 00:21:39.473 "adrfam": "IPv4", 00:21:39.473 "traddr": "10.0.0.2", 00:21:39.473 "trsvcid": "4420" 00:21:39.473 }, 00:21:39.473 "peer_address": { 00:21:39.473 "trtype": "TCP", 00:21:39.473 "adrfam": "IPv4", 00:21:39.473 "traddr": "10.0.0.1", 00:21:39.473 "trsvcid": "47924" 00:21:39.473 }, 00:21:39.473 "auth": { 00:21:39.473 "state": "completed", 00:21:39.473 "digest": "sha512", 00:21:39.473 "dhgroup": "ffdhe2048" 00:21:39.473 } 00:21:39.473 } 00:21:39.473 ]' 00:21:39.473 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.732 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.732 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.732 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:39.732 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.732 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.732 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.732 03:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.991 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:39.991 03:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.557 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.558 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:40.558 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.558 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.558 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.558 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.816 00:21:40.816 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.816 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.816 03:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.075 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.075 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.075 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.075 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.075 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.075 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.075 { 00:21:41.075 "cntlid": 107, 00:21:41.075 "qid": 0, 00:21:41.075 "state": "enabled", 00:21:41.075 "thread": "nvmf_tgt_poll_group_000", 00:21:41.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:41.075 "listen_address": { 00:21:41.075 "trtype": "TCP", 00:21:41.075 "adrfam": "IPv4", 00:21:41.075 "traddr": "10.0.0.2", 00:21:41.075 "trsvcid": "4420" 00:21:41.075 }, 00:21:41.075 "peer_address": { 00:21:41.075 "trtype": "TCP", 00:21:41.075 "adrfam": "IPv4", 00:21:41.075 "traddr": "10.0.0.1", 00:21:41.075 "trsvcid": "47944" 00:21:41.075 }, 00:21:41.075 "auth": { 00:21:41.075 "state": "completed", 00:21:41.075 "digest": "sha512", 00:21:41.075 "dhgroup": "ffdhe2048" 00:21:41.075 } 00:21:41.075 } 00:21:41.075 ]' 00:21:41.075 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.075 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.075 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.334 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:41.334 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:41.334 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.334 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.334 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.592 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:41.592 03:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.159 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.418 00:21:42.418 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.418 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.418 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.677 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.677 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.677 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.677 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.677 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.677 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.677 { 00:21:42.677 "cntlid": 109, 00:21:42.677 "qid": 0, 00:21:42.677 "state": "enabled", 00:21:42.677 "thread": "nvmf_tgt_poll_group_000", 00:21:42.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:42.677 "listen_address": { 00:21:42.677 "trtype": "TCP", 00:21:42.677 "adrfam": "IPv4", 00:21:42.677 "traddr": "10.0.0.2", 00:21:42.677 "trsvcid": "4420" 00:21:42.677 }, 00:21:42.677 "peer_address": { 00:21:42.677 "trtype": "TCP", 00:21:42.677 "adrfam": "IPv4", 00:21:42.677 "traddr": "10.0.0.1", 00:21:42.677 "trsvcid": "47962" 00:21:42.677 }, 00:21:42.677 "auth": { 00:21:42.677 "state": "completed", 00:21:42.677 "digest": "sha512", 00:21:42.677 "dhgroup": "ffdhe2048" 00:21:42.677 } 00:21:42.677 } 00:21:42.677 ]' 00:21:42.677 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.677 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.677 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.677 03:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:42.936 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.936 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.936 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.936 03:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.936 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:42.936 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:43.503 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.503 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:43.503 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.503 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.503 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.503 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.503 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:43.503 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:43.762 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:43.762 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.762 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.762 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:43.762 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:43.762 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.762 03:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:43.762 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.762 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.762 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.762 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:43.762 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.762 03:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.021 00:21:44.021 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.021 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.021 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.279 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.279 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.279 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.279 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.279 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.279 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.279 { 00:21:44.279 "cntlid": 111, 00:21:44.279 "qid": 0, 00:21:44.279 "state": "enabled", 00:21:44.279 "thread": "nvmf_tgt_poll_group_000", 00:21:44.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:44.279 "listen_address": { 00:21:44.279 "trtype": "TCP", 00:21:44.279 "adrfam": "IPv4", 00:21:44.279 "traddr": "10.0.0.2", 00:21:44.279 "trsvcid": "4420" 00:21:44.279 }, 00:21:44.279 "peer_address": { 00:21:44.279 "trtype": "TCP", 00:21:44.279 "adrfam": "IPv4", 00:21:44.279 "traddr": "10.0.0.1", 00:21:44.279 "trsvcid": "47980" 00:21:44.279 }, 00:21:44.279 "auth": { 00:21:44.279 "state": "completed", 00:21:44.279 "digest": "sha512", 00:21:44.279 "dhgroup": "ffdhe2048" 00:21:44.279 } 00:21:44.279 } 00:21:44.279 ]' 00:21:44.279 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.279 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.279 
03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.279 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:44.279 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.538 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.538 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.538 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.538 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:44.538 03:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:45.105 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.105 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:45.105 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.105 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.105 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.105 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:45.105 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.106 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:45.106 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:45.364 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:45.364 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.365 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.365 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:45.365 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:45.365 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.365 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.365 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.365 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.365 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.365 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.365 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.365 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.624 00:21:45.624 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.624 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.624 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.882 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.882 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.883 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.883 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.883 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.883 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.883 { 00:21:45.883 "cntlid": 113, 00:21:45.883 "qid": 0, 00:21:45.883 "state": "enabled", 00:21:45.883 "thread": "nvmf_tgt_poll_group_000", 00:21:45.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:45.883 "listen_address": { 00:21:45.883 "trtype": "TCP", 00:21:45.883 "adrfam": "IPv4", 00:21:45.883 "traddr": "10.0.0.2", 00:21:45.883 "trsvcid": "4420" 00:21:45.883 }, 00:21:45.883 "peer_address": { 00:21:45.883 "trtype": "TCP", 00:21:45.883 "adrfam": "IPv4", 00:21:45.883 "traddr": "10.0.0.1", 00:21:45.883 "trsvcid": "51284" 00:21:45.883 }, 00:21:45.883 "auth": { 00:21:45.883 "state": "completed", 00:21:45.883 "digest": "sha512", 00:21:45.883 "dhgroup": "ffdhe3072" 00:21:45.883 } 00:21:45.883 } 00:21:45.883 ]' 00:21:45.883 03:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.883 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.883 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.883 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:45.883 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.883 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.883 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.883 03:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.142 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:46.142 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:46.709 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.709 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:46.709 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.709 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.709 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.709 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.709 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:46.709 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:46.969 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:46.969 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.969 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:46.969 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:46.969 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:46.969 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.969 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.969 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.969 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.969 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.969 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.969 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.969 03:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.227 00:21:47.227 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.227 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.227 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.487 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.487 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.487 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.487 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.487 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.487 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.487 { 00:21:47.487 "cntlid": 115, 00:21:47.487 "qid": 0, 00:21:47.487 "state": "enabled", 00:21:47.487 "thread": "nvmf_tgt_poll_group_000", 00:21:47.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:47.487 "listen_address": { 00:21:47.487 "trtype": "TCP", 00:21:47.487 "adrfam": "IPv4", 00:21:47.487 "traddr": "10.0.0.2", 00:21:47.487 "trsvcid": "4420" 00:21:47.487 }, 00:21:47.487 "peer_address": { 00:21:47.487 "trtype": "TCP", 00:21:47.487 "adrfam": "IPv4", 
00:21:47.487 "traddr": "10.0.0.1", 00:21:47.487 "trsvcid": "51314" 00:21:47.487 }, 00:21:47.487 "auth": { 00:21:47.487 "state": "completed", 00:21:47.487 "digest": "sha512", 00:21:47.487 "dhgroup": "ffdhe3072" 00:21:47.487 } 00:21:47.487 } 00:21:47.487 ]' 00:21:47.487 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.487 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.487 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.487 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:47.487 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.487 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.487 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.487 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.746 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:47.746 03:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:48.314 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.314 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:48.314 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.314 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.314 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.314 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.314 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:48.314 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:48.573 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:48.573 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.573 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.573 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:48.573 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:48.573 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.573 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.573 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.573 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.573 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.573 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.573 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.573 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.832 00:21:48.832 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.832 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.832 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.091 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.091 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.091 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.091 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.091 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.091 03:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.091 { 00:21:49.091 "cntlid": 117, 00:21:49.091 "qid": 0, 00:21:49.091 "state": "enabled", 00:21:49.091 "thread": "nvmf_tgt_poll_group_000", 00:21:49.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:49.091 "listen_address": { 00:21:49.091 "trtype": "TCP", 
00:21:49.091 "adrfam": "IPv4", 00:21:49.091 "traddr": "10.0.0.2", 00:21:49.091 "trsvcid": "4420" 00:21:49.091 }, 00:21:49.091 "peer_address": { 00:21:49.091 "trtype": "TCP", 00:21:49.091 "adrfam": "IPv4", 00:21:49.091 "traddr": "10.0.0.1", 00:21:49.091 "trsvcid": "51342" 00:21:49.091 }, 00:21:49.091 "auth": { 00:21:49.091 "state": "completed", 00:21:49.091 "digest": "sha512", 00:21:49.091 "dhgroup": "ffdhe3072" 00:21:49.091 } 00:21:49.091 } 00:21:49.091 ]' 00:21:49.091 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.091 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.091 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.091 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:49.091 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.092 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.092 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.092 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.350 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:49.350 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:49.918 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.918 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:49.918 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.918 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.918 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.918 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.918 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.918 03:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.176 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:50.176 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.176 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.176 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:50.176 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:50.176 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.176 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:50.176 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.176 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.176 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.176 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:50.176 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.176 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.434 00:21:50.434 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.434 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.434 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.692 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.692 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.692 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.692 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.692 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.692 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.692 { 00:21:50.692 "cntlid": 119, 00:21:50.692 "qid": 0, 00:21:50.692 "state": "enabled", 00:21:50.692 "thread": "nvmf_tgt_poll_group_000", 00:21:50.692 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:50.692 "listen_address": { 00:21:50.692 "trtype": "TCP", 00:21:50.692 "adrfam": "IPv4", 00:21:50.692 "traddr": "10.0.0.2", 00:21:50.692 "trsvcid": "4420" 00:21:50.692 }, 00:21:50.692 "peer_address": { 00:21:50.692 "trtype": "TCP", 00:21:50.692 "adrfam": "IPv4", 00:21:50.692 "traddr": "10.0.0.1", 00:21:50.692 "trsvcid": "51380" 00:21:50.692 }, 00:21:50.692 "auth": { 00:21:50.692 "state": "completed", 00:21:50.692 "digest": "sha512", 00:21:50.692 "dhgroup": "ffdhe3072" 00:21:50.692 } 00:21:50.692 } 00:21:50.692 ]' 00:21:50.692 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.692 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.692 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.692 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:50.692 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.693 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.693 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.693 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.951 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:50.951 03:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:51.519 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.519 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:51.519 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.519 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.519 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.519 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.519 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.519 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:51.519 03:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:51.778 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:51.778 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.778 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.778 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:51.778 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:51.778 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.778 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.778 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.778 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.778 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.778 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.778 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.778 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.044 00:21:52.044 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.044 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.044 03:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.301 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.301 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.301 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.301 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.301 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.301 03:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.301 { 00:21:52.301 "cntlid": 121, 00:21:52.301 "qid": 0, 00:21:52.301 "state": "enabled", 00:21:52.301 "thread": "nvmf_tgt_poll_group_000", 00:21:52.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:52.301 "listen_address": { 00:21:52.301 "trtype": "TCP", 00:21:52.301 "adrfam": "IPv4", 00:21:52.301 "traddr": "10.0.0.2", 00:21:52.301 "trsvcid": "4420" 00:21:52.301 }, 00:21:52.301 "peer_address": { 00:21:52.301 "trtype": "TCP", 00:21:52.301 "adrfam": "IPv4", 00:21:52.301 "traddr": "10.0.0.1", 00:21:52.301 "trsvcid": "51416" 00:21:52.301 }, 00:21:52.301 "auth": { 00:21:52.301 "state": "completed", 00:21:52.301 "digest": "sha512", 00:21:52.301 "dhgroup": "ffdhe4096" 00:21:52.301 } 00:21:52.301 } 00:21:52.301 ]' 00:21:52.301 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.301 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.301 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.301 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:52.301 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.302 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.302 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.302 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.560 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:52.560 03:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:53.128 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.128 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:53.128 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.128 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.128 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
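Zooming out, target/auth.sh@119-123 in these traces is a nested sweep: for every dhgroup, every configured key index is pushed through the same pin-options-then-connect_authenticate cycle, with the digest held at sha512 in this portion of the run. The shape of that loop, sketched with illustrative arrays (the real keys[]/ckeys[] values are generated earlier in the test) and assuming auth.sh's connect_authenticate helper, whose body is what the traces above show:

# Illustrative stand-ins: only the indices 0..3 matter for "${!keys[@]}".
keys=(key0 key1 key2 key3)
# The dhgroups reached in this excerpt; the full test may cover more.
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    # Pin the host to one digest/dhgroup, then run a full
    # add_host / attach / verify / detach / nvme-connect cycle.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    connect_authenticate sha512 "$dhgroup" "$keyid"
  done
done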
00:21:53.128 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.128 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.128 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.387 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:53.387 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.387 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.387 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:53.387 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:53.387 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.388 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.388 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.388 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.388 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.388 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.388 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.388 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.646 00:21:53.646 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.646 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.646 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.647 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.647 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.647 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.647 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.647 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.647 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.647 { 00:21:53.647 "cntlid": 123, 00:21:53.647 "qid": 0, 00:21:53.647 "state": "enabled", 00:21:53.647 "thread": "nvmf_tgt_poll_group_000", 00:21:53.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:53.647 "listen_address": { 00:21:53.647 "trtype": "TCP", 00:21:53.647 "adrfam": "IPv4", 00:21:53.647 "traddr": "10.0.0.2", 00:21:53.647 "trsvcid": "4420" 00:21:53.647 }, 00:21:53.647 "peer_address": { 00:21:53.647 "trtype": "TCP", 00:21:53.647 "adrfam": "IPv4", 00:21:53.647 "traddr": "10.0.0.1", 00:21:53.647 "trsvcid": "51438" 00:21:53.647 }, 00:21:53.647 "auth": { 00:21:53.647 "state": "completed", 00:21:53.647 "digest": "sha512", 00:21:53.647 "dhgroup": "ffdhe4096" 00:21:53.647 } 00:21:53.647 } 00:21:53.647 ]' 00:21:53.647 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.994 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.994 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.994 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:53.994 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.994 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.994 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.994 03:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.994 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:53.994 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:21:54.561 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.561 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:54.561 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.561 03:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.561 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.561 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.561 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:54.561 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:54.820 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:54.820 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.820 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.820 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:54.820 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:54.820 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.820 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.820 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.820 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.820 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.820 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.820 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.820 03:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.079 00:21:55.079 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.079 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.079 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.337 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.337 03:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.337 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.337 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.338 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.338 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.338 { 00:21:55.338 "cntlid": 125, 00:21:55.338 "qid": 0, 00:21:55.338 "state": "enabled", 00:21:55.338 "thread": "nvmf_tgt_poll_group_000", 00:21:55.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:55.338 "listen_address": { 00:21:55.338 "trtype": "TCP", 00:21:55.338 "adrfam": "IPv4", 00:21:55.338 "traddr": "10.0.0.2", 00:21:55.338 "trsvcid": "4420" 00:21:55.338 }, 00:21:55.338 "peer_address": { 00:21:55.338 "trtype": "TCP", 00:21:55.338 "adrfam": "IPv4", 00:21:55.338 "traddr": "10.0.0.1", 00:21:55.338 "trsvcid": "51450" 00:21:55.338 }, 00:21:55.338 "auth": { 00:21:55.338 "state": "completed", 00:21:55.338 "digest": "sha512", 00:21:55.338 "dhgroup": "ffdhe4096" 00:21:55.338 } 00:21:55.338 } 00:21:55.338 ]' 00:21:55.338 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.338 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.338 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.338 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:55.338 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.596 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.596 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.596 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.596 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:55.596 03:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:21:56.164 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.164 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:56.164 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.164 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.164 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.164 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.164 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:56.164 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:56.423 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:56.423 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.423 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.423 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:56.423 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:56.423 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.423 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:56.423 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.423 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.423 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.423 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:56.423 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.423 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.682 00:21:56.682 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.682 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.682 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.940 03:03:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.940 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.940 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.940 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.940 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.940 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.940 { 00:21:56.940 "cntlid": 127, 00:21:56.940 "qid": 0, 00:21:56.940 "state": "enabled", 00:21:56.940 "thread": "nvmf_tgt_poll_group_000", 00:21:56.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:56.940 "listen_address": { 00:21:56.940 "trtype": "TCP", 00:21:56.940 "adrfam": "IPv4", 00:21:56.940 "traddr": "10.0.0.2", 00:21:56.940 "trsvcid": "4420" 00:21:56.940 }, 00:21:56.940 "peer_address": { 00:21:56.940 "trtype": "TCP", 00:21:56.940 "adrfam": "IPv4", 00:21:56.940 "traddr": "10.0.0.1", 00:21:56.940 "trsvcid": "51820" 00:21:56.940 }, 00:21:56.940 "auth": { 00:21:56.940 "state": "completed", 00:21:56.940 "digest": "sha512", 00:21:56.940 "dhgroup": "ffdhe4096" 00:21:56.940 } 00:21:56.940 } 00:21:56.940 ]' 00:21:56.940 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.940 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.940 03:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.940 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:56.940 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.940 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.940 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.940 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.199 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:57.199 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:21:57.766 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.766 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:57.766 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.766 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.766 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.766 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.766 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.766 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:57.766 03:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.025 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:58.025 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.025 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.025 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:58.025 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:58.025 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.025 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.025 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.025 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.025 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.025 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.025 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.025 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.283 00:21:58.283 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.283 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.284 
03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.542 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.542 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.542 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.542 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.542 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.542 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.542 { 00:21:58.542 "cntlid": 129, 00:21:58.542 "qid": 0, 00:21:58.542 "state": "enabled", 00:21:58.542 "thread": "nvmf_tgt_poll_group_000", 00:21:58.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:58.542 "listen_address": { 00:21:58.542 "trtype": "TCP", 00:21:58.542 "adrfam": "IPv4", 00:21:58.542 "traddr": "10.0.0.2", 00:21:58.542 "trsvcid": "4420" 00:21:58.542 }, 00:21:58.542 "peer_address": { 00:21:58.542 "trtype": "TCP", 00:21:58.542 "adrfam": "IPv4", 00:21:58.542 "traddr": "10.0.0.1", 00:21:58.542 "trsvcid": "51840" 00:21:58.542 }, 00:21:58.542 "auth": { 00:21:58.542 "state": "completed", 00:21:58.542 "digest": "sha512", 00:21:58.542 "dhgroup": "ffdhe6144" 00:21:58.542 } 00:21:58.542 } 00:21:58.542 ]' 00:21:58.542 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.542 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.542 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.800 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:58.800 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.800 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.800 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.800 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.058 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:59.058 03:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret 
DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.625 03:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.192 00:22:00.192 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.192 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.192 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.192 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.192 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.192 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.192 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.192 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.192 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.192 { 00:22:00.192 "cntlid": 131, 00:22:00.192 "qid": 0, 00:22:00.192 "state": "enabled", 00:22:00.192 "thread": "nvmf_tgt_poll_group_000", 00:22:00.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:00.192 "listen_address": { 00:22:00.192 "trtype": "TCP", 00:22:00.192 "adrfam": "IPv4", 00:22:00.192 "traddr": "10.0.0.2", 00:22:00.192 "trsvcid": "4420" 00:22:00.192 }, 00:22:00.192 "peer_address": { 00:22:00.192 "trtype": "TCP", 00:22:00.192 "adrfam": "IPv4", 00:22:00.192 "traddr": "10.0.0.1", 00:22:00.192 "trsvcid": "51862" 00:22:00.192 }, 00:22:00.192 "auth": { 00:22:00.192 "state": "completed", 00:22:00.192 "digest": "sha512", 00:22:00.192 "dhgroup": "ffdhe6144" 00:22:00.192 } 00:22:00.192 } 00:22:00.192 ]' 00:22:00.192 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.449 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.449 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.449 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:00.449 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.449 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.449 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.449 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.707 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:22:00.707 03:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.275 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.842 00:22:01.842 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.842 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.842 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.842 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.842 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.842 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.842 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.842 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.842 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.842 { 00:22:01.842 "cntlid": 133, 00:22:01.842 "qid": 0, 00:22:01.842 "state": "enabled", 00:22:01.842 "thread": "nvmf_tgt_poll_group_000", 00:22:01.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:01.842 "listen_address": { 00:22:01.842 "trtype": "TCP", 00:22:01.842 "adrfam": "IPv4", 00:22:01.842 "traddr": "10.0.0.2", 00:22:01.842 "trsvcid": "4420" 00:22:01.842 }, 00:22:01.842 "peer_address": { 00:22:01.842 "trtype": "TCP", 00:22:01.842 "adrfam": "IPv4", 00:22:01.842 "traddr": "10.0.0.1", 00:22:01.842 "trsvcid": "51896" 00:22:01.842 }, 00:22:01.842 "auth": { 00:22:01.842 "state": "completed", 00:22:01.842 "digest": "sha512", 00:22:01.842 "dhgroup": "ffdhe6144" 00:22:01.842 } 00:22:01.842 } 00:22:01.842 ]' 00:22:01.842 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.842 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.842 03:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.100 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.100 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.100 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.100 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.100 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.359 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret 
DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:22:02.359 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:22:02.926 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.926 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:02.926 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.926 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.926 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.926 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.926 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.926 03:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.926 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:02.926 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.926 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.926 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:02.926 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.926 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.926 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:02.926 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.926 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.926 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.926 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.926 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:02.926 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.494 00:22:03.494 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.494 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.494 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.494 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.494 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.494 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.494 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.494 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.494 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.494 { 00:22:03.494 "cntlid": 135, 00:22:03.494 "qid": 0, 00:22:03.494 "state": "enabled", 00:22:03.494 "thread": "nvmf_tgt_poll_group_000", 00:22:03.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:03.494 "listen_address": { 00:22:03.494 "trtype": "TCP", 00:22:03.494 "adrfam": "IPv4", 00:22:03.494 "traddr": "10.0.0.2", 00:22:03.494 "trsvcid": "4420" 00:22:03.494 }, 00:22:03.494 "peer_address": { 00:22:03.494 "trtype": "TCP", 00:22:03.494 "adrfam": "IPv4", 00:22:03.494 "traddr": "10.0.0.1", 00:22:03.494 "trsvcid": "51926" 00:22:03.494 }, 00:22:03.494 "auth": { 00:22:03.494 "state": "completed", 00:22:03.494 "digest": "sha512", 00:22:03.494 "dhgroup": "ffdhe6144" 00:22:03.494 } 00:22:03.494 } 00:22:03.494 ]' 00:22:03.494 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.752 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.752 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.752 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:03.752 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.752 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.752 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.752 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.011 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:22:04.011 03:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:04.577 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.578 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.578 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.578 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.578 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.578 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.578 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.578 03:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.144 00:22:05.144 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.144 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.144 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.402 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.402 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.402 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.402 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.402 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.402 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.402 { 00:22:05.402 "cntlid": 137, 00:22:05.402 "qid": 0, 00:22:05.402 "state": "enabled", 00:22:05.402 "thread": "nvmf_tgt_poll_group_000", 00:22:05.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:05.402 "listen_address": { 00:22:05.402 "trtype": "TCP", 00:22:05.402 "adrfam": "IPv4", 00:22:05.402 "traddr": "10.0.0.2", 00:22:05.402 "trsvcid": "4420" 00:22:05.402 }, 00:22:05.402 "peer_address": { 00:22:05.402 "trtype": "TCP", 00:22:05.402 "adrfam": "IPv4", 00:22:05.402 "traddr": "10.0.0.1", 00:22:05.402 "trsvcid": "51950" 00:22:05.402 }, 00:22:05.402 "auth": { 00:22:05.402 "state": "completed", 00:22:05.402 "digest": "sha512", 00:22:05.402 "dhgroup": "ffdhe8192" 00:22:05.402 } 00:22:05.402 } 00:22:05.402 ]' 00:22:05.402 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.402 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.402 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.402 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:05.402 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.402 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.402 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.402 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.661 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:22:05.661 03:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:22:06.229 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.229 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:06.229 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.229 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.229 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.229 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.229 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.229 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.488 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:06.488 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.488 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:06.488 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:06.488 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:06.488 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.488 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.488 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.488 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.488 03:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.488 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.488 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.488 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.055 00:22:07.055 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.055 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.055 03:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.055 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.055 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.055 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.055 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.055 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.055 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.055 { 00:22:07.055 "cntlid": 139, 00:22:07.055 "qid": 0, 00:22:07.055 "state": "enabled", 00:22:07.055 "thread": "nvmf_tgt_poll_group_000", 00:22:07.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:07.055 "listen_address": { 00:22:07.055 "trtype": "TCP", 00:22:07.055 "adrfam": "IPv4", 00:22:07.055 "traddr": "10.0.0.2", 00:22:07.055 "trsvcid": "4420" 00:22:07.055 }, 00:22:07.055 "peer_address": { 00:22:07.055 "trtype": "TCP", 00:22:07.055 "adrfam": "IPv4", 00:22:07.055 "traddr": "10.0.0.1", 00:22:07.055 "trsvcid": "42498" 00:22:07.055 }, 00:22:07.055 "auth": { 00:22:07.055 "state": "completed", 00:22:07.055 "digest": "sha512", 00:22:07.055 "dhgroup": "ffdhe8192" 00:22:07.055 } 00:22:07.055 } 00:22:07.055 ]' 00:22:07.055 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.055 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.055 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.314 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:07.314 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.314 03:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.314 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.314 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.572 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:22:07.572 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: --dhchap-ctrl-secret DHHC-1:02:M2YyMzViYTA5M2Q2NWQzNmEzOGI2ZmE5OWY2NjQyZDQ3MWExOTU4MjIwZjg0ZDk46iaH5w==: 00:22:08.140 03:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.140 03:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.140 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.708 00:22:08.708 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.708 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.708 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.966 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.966 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.967 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.967 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.967 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.967 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.967 { 00:22:08.967 "cntlid": 141, 00:22:08.967 "qid": 0, 00:22:08.967 "state": "enabled", 00:22:08.967 "thread": "nvmf_tgt_poll_group_000", 00:22:08.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:08.967 "listen_address": { 00:22:08.967 "trtype": "TCP", 00:22:08.967 "adrfam": "IPv4", 00:22:08.967 "traddr": "10.0.0.2", 00:22:08.967 "trsvcid": "4420" 00:22:08.967 }, 00:22:08.967 "peer_address": { 00:22:08.967 "trtype": "TCP", 00:22:08.967 "adrfam": "IPv4", 00:22:08.967 "traddr": "10.0.0.1", 00:22:08.967 "trsvcid": "42520" 00:22:08.967 }, 00:22:08.967 "auth": { 00:22:08.967 "state": "completed", 00:22:08.967 "digest": "sha512", 00:22:08.967 "dhgroup": "ffdhe8192" 00:22:08.967 } 00:22:08.967 } 00:22:08.967 ]' 00:22:08.967 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.967 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.967 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.967 03:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.967 03:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.967 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.967 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.967 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.225 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:22:09.225 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:01:ZTIyZTRkZGYwNjQyZWQ0ZjM4NTk1ZTAzNTYwOTI5ZDOjmhjN: 00:22:09.793 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.793 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:09.793 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.793 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.793 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.793 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.793 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:09.793 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.052 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:10.052 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.052 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.052 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:10.052 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:10.052 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.052 03:03:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:10.052 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.052 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.052 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.052 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:10.052 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.052 03:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.619 00:22:10.619 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.619 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.619 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.619 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.619 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.619 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.619 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.619 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.619 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.619 { 00:22:10.619 "cntlid": 143, 00:22:10.619 "qid": 0, 00:22:10.619 "state": "enabled", 00:22:10.619 "thread": "nvmf_tgt_poll_group_000", 00:22:10.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:10.619 "listen_address": { 00:22:10.619 "trtype": "TCP", 00:22:10.619 "adrfam": "IPv4", 00:22:10.619 "traddr": "10.0.0.2", 00:22:10.619 "trsvcid": "4420" 00:22:10.619 }, 00:22:10.619 "peer_address": { 00:22:10.619 "trtype": "TCP", 00:22:10.619 "adrfam": "IPv4", 00:22:10.619 "traddr": "10.0.0.1", 00:22:10.619 "trsvcid": "42542" 00:22:10.619 }, 00:22:10.619 "auth": { 00:22:10.619 "state": "completed", 00:22:10.619 "digest": "sha512", 00:22:10.619 "dhgroup": "ffdhe8192" 00:22:10.619 } 00:22:10.619 } 00:22:10.619 ]' 00:22:10.619 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.619 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.619 
03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.878 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:10.878 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.878 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.878 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.878 03:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.137 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:22:11.137 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:22:11.703 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.704 03:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.704 03:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.273 00:22:12.273 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.273 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.273 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.532 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.532 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.532 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.532 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.532 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.532 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.532 { 00:22:12.532 "cntlid": 145, 00:22:12.532 "qid": 0, 00:22:12.532 "state": "enabled", 00:22:12.532 "thread": "nvmf_tgt_poll_group_000", 00:22:12.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:12.532 "listen_address": { 00:22:12.532 "trtype": "TCP", 00:22:12.532 "adrfam": "IPv4", 00:22:12.532 "traddr": "10.0.0.2", 00:22:12.532 "trsvcid": "4420" 00:22:12.532 }, 00:22:12.532 "peer_address": { 00:22:12.532 
"trtype": "TCP", 00:22:12.532 "adrfam": "IPv4", 00:22:12.532 "traddr": "10.0.0.1", 00:22:12.532 "trsvcid": "42568" 00:22:12.532 }, 00:22:12.532 "auth": { 00:22:12.532 "state": "completed", 00:22:12.532 "digest": "sha512", 00:22:12.532 "dhgroup": "ffdhe8192" 00:22:12.532 } 00:22:12.532 } 00:22:12.532 ]' 00:22:12.532 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.532 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.532 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.532 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:12.532 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.532 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.532 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.532 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.791 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:22:12.791 03:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:M2YyMGM3NjBmZTdmZmYxNmEyYWZjODMxMDk5ZDQyOTBlNjQ5NzljYWYwNmM0MmVm//LP3g==: --dhchap-ctrl-secret DHHC-1:03:NzcxZmIzMzQ0MGQ4ZjhlM2QxOGZiZWM5ODQyMzk1Y2JkM2NjZTAzNzdiZDRmNmM4NjhiZDkyOTQwNmQ4MWM5MG+74TQ=: 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:13.359 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:13.927 request: 00:22:13.927 { 00:22:13.927 "name": "nvme0", 00:22:13.927 "trtype": "tcp", 00:22:13.927 "traddr": "10.0.0.2", 00:22:13.927 "adrfam": "ipv4", 00:22:13.927 "trsvcid": "4420", 00:22:13.927 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:13.927 "prchk_reftag": false, 00:22:13.927 "prchk_guard": false, 00:22:13.927 "hdgst": false, 00:22:13.927 "ddgst": false, 00:22:13.927 "dhchap_key": "key2", 00:22:13.927 "allow_unrecognized_csi": false, 00:22:13.927 "method": "bdev_nvme_attach_controller", 00:22:13.927 "req_id": 1 00:22:13.927 } 00:22:13.927 Got JSON-RPC error response 00:22:13.927 response: 00:22:13.927 { 00:22:13.927 "code": -5, 00:22:13.927 "message": "Input/output error" 00:22:13.927 } 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.927 03:03:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:13.927 03:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:14.186 request: 00:22:14.186 { 00:22:14.186 "name": "nvme0", 00:22:14.186 "trtype": "tcp", 00:22:14.186 "traddr": "10.0.0.2", 00:22:14.186 "adrfam": "ipv4", 00:22:14.186 "trsvcid": "4420", 00:22:14.186 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:14.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:14.186 "prchk_reftag": false, 00:22:14.186 "prchk_guard": false, 00:22:14.186 "hdgst": false, 00:22:14.186 "ddgst": false, 00:22:14.186 "dhchap_key": "key1", 00:22:14.186 "dhchap_ctrlr_key": "ckey2", 00:22:14.186 "allow_unrecognized_csi": false, 00:22:14.186 "method": "bdev_nvme_attach_controller", 00:22:14.186 "req_id": 1 00:22:14.186 } 00:22:14.186 Got JSON-RPC error response 00:22:14.186 response: 00:22:14.186 { 00:22:14.186 "code": -5, 00:22:14.186 "message": "Input/output error" 00:22:14.186 } 00:22:14.186 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:14.186 03:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.186 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.186 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.186 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:14.186 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.186 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.186 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.186 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:14.186 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.186 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.186 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.445 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.445 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:14.445 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.445 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:14.445 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.445 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:14.445 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.445 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.445 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.445 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.704 request: 00:22:14.704 { 00:22:14.704 "name": "nvme0", 00:22:14.704 "trtype": "tcp", 00:22:14.704 "traddr": "10.0.0.2", 00:22:14.704 "adrfam": "ipv4", 00:22:14.704 "trsvcid": "4420", 00:22:14.704 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:14.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:14.704 "prchk_reftag": false, 00:22:14.704 "prchk_guard": false, 00:22:14.704 "hdgst": false, 00:22:14.704 "ddgst": false, 00:22:14.704 "dhchap_key": "key1", 00:22:14.704 "dhchap_ctrlr_key": "ckey1", 00:22:14.704 "allow_unrecognized_csi": false, 00:22:14.704 "method": "bdev_nvme_attach_controller", 00:22:14.704 "req_id": 1 00:22:14.704 } 00:22:14.704 Got JSON-RPC error response 00:22:14.704 response: 00:22:14.704 { 00:22:14.704 "code": -5, 00:22:14.704 "message": "Input/output error" 00:22:14.704 } 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 308280 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 308280 ']' 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 308280 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 308280 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 308280' 00:22:14.704 killing process with pid 308280 00:22:14.704 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 308280 00:22:14.963 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 308280 00:22:14.963 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:14.963 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:14.963 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.963 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:14.963 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=311697 00:22:14.963 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:14.963 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 311697 00:22:14.963 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 311697 ']' 00:22:14.963 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.963 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.963 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.963 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.963 03:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.222 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.222 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:15.222 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:15.222 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:15.222 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.222 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.222 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:15.222 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 311697 00:22:15.222 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 311697 ']' 00:22:15.222 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.222 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.222 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
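A rough sketch of the restart pattern traced above, kept separate from the captured output for readability: the previous nvmf_tgt reactor (pid 308280) is killed and a new one is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc -L nvmf_auth, after which the harness waits on its /var/tmp/spdk.sock RPC socket. Paths, flags, and the namespace name are copied from the log; the polling loop and the framework_start_init call are assumptions about what the nvmfappstart/waitforlisten helpers do internally, not a literal transcript.

# Illustrative sketch only -- not part of the captured log output.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC_SOCK=/var/tmp/spdk.sock

# Relaunch the target inside the test netns, paused until initialization is triggered over RPC
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Poll the RPC socket until the application answers (rough equivalent of waitforlisten)
until "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done

# Finish subsystem initialization so the keyring/auth setup below can proceed
"$SPDK/scripts/rpc.py" -s "$RPC_SOCK" framework_start_init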
00:22:15.222 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.222 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.481 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.481 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:15.481 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:15.481 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.482 null0 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wwy 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.T0a ]] 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.T0a 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.14D 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Rbb ]] 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Rbb 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:15.482 03:03:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.n1o 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.HF0 ]] 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HF0 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Lul 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.482 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.740 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.740 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:15.740 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:15.740 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.740 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:15.740 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:15.741 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:15.741 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.741 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:15.741 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.741 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.741 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.741 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:15.741 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
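The keyring and host-authorization steps traced above condense into the following sketch (illustrative only, not part of the captured output): the generated DH-HMAC-CHAP secrets are registered as keyring file keys on the target, the host NQN is authorized on the subsystem with one of those keys, and the host-side bdev application then attaches the controller using the same key name, as the rpc.py call that follows shows. Socket paths, key file names, NQNs, and addresses are copied from the log; the note that the host application already holds key3 in its own keyring is an assumption about setup performed earlier in the test, not shown in this excerpt.

# Illustrative sketch only -- not part of the captured log output.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TGT_SOCK=/var/tmp/spdk.sock     # nvmf_tgt (target) RPC socket
HOST_SOCK=/var/tmp/host.sock    # host-side bdev application RPC socket
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

# 1) Register the generated DHCHAP secret with the target's keyring
"$SPDK/scripts/rpc.py" -s "$TGT_SOCK" keyring_file_add_key key3 /tmp/spdk.key-sha512.Lul

# 2) Authorize the host on the subsystem, binding it to that key
"$SPDK/scripts/rpc.py" -s "$TGT_SOCK" nvmf_subsystem_add_host \
    nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3

# 3) From the host application (assumed to have key3 in its own keyring),
#    attach the controller and authenticate with that key
"$SPDK/scripts/rpc.py" -s "$HOST_SOCK" bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3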
00:22:15.741 03:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.308 nvme0n1 00:22:16.308 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.308 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.308 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.566 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.566 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.566 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.566 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.566 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.566 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.566 { 00:22:16.566 "cntlid": 1, 00:22:16.566 "qid": 0, 00:22:16.566 "state": "enabled", 00:22:16.566 "thread": "nvmf_tgt_poll_group_000", 00:22:16.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:16.566 "listen_address": { 00:22:16.566 "trtype": "TCP", 00:22:16.566 "adrfam": "IPv4", 00:22:16.566 "traddr": "10.0.0.2", 00:22:16.566 "trsvcid": "4420" 00:22:16.566 }, 00:22:16.566 "peer_address": { 00:22:16.566 "trtype": "TCP", 00:22:16.566 "adrfam": "IPv4", 00:22:16.566 "traddr": "10.0.0.1", 00:22:16.566 "trsvcid": "52580" 00:22:16.566 }, 00:22:16.566 "auth": { 00:22:16.566 "state": "completed", 00:22:16.566 "digest": "sha512", 00:22:16.566 "dhgroup": "ffdhe8192" 00:22:16.566 } 00:22:16.566 } 00:22:16.566 ]' 00:22:16.566 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.566 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.566 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.566 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:16.566 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.825 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.825 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.825 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.825 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:22:16.825 03:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:22:17.392 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.392 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:17.392 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.392 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.392 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.392 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:17.392 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.392 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.392 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.392 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:17.392 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:17.651 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:17.651 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:17.651 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:17.651 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:17.651 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.651 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:17.652 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.652 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:17.652 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.652 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.911 request: 00:22:17.911 { 00:22:17.911 "name": "nvme0", 00:22:17.911 "trtype": "tcp", 00:22:17.911 "traddr": "10.0.0.2", 00:22:17.911 "adrfam": "ipv4", 00:22:17.911 "trsvcid": "4420", 00:22:17.911 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:17.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:17.911 "prchk_reftag": false, 00:22:17.911 "prchk_guard": false, 00:22:17.911 "hdgst": false, 00:22:17.911 "ddgst": false, 00:22:17.911 "dhchap_key": "key3", 00:22:17.911 "allow_unrecognized_csi": false, 00:22:17.911 "method": "bdev_nvme_attach_controller", 00:22:17.911 "req_id": 1 00:22:17.911 } 00:22:17.911 Got JSON-RPC error response 00:22:17.911 response: 00:22:17.911 { 00:22:17.911 "code": -5, 00:22:17.911 "message": "Input/output error" 00:22:17.911 } 00:22:17.911 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:17.911 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:17.911 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:17.911 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:17.911 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:17.911 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:17.911 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:17.911 03:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.170 request: 00:22:18.170 { 00:22:18.170 "name": "nvme0", 00:22:18.170 "trtype": "tcp", 00:22:18.170 "traddr": "10.0.0.2", 00:22:18.170 "adrfam": "ipv4", 00:22:18.170 "trsvcid": "4420", 00:22:18.170 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:18.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:18.170 "prchk_reftag": false, 00:22:18.170 "prchk_guard": false, 00:22:18.170 "hdgst": false, 00:22:18.170 "ddgst": false, 00:22:18.170 "dhchap_key": "key3", 00:22:18.170 "allow_unrecognized_csi": false, 00:22:18.170 "method": "bdev_nvme_attach_controller", 00:22:18.170 "req_id": 1 00:22:18.170 } 00:22:18.170 Got JSON-RPC error response 00:22:18.170 response: 00:22:18.170 { 00:22:18.170 "code": -5, 00:22:18.170 "message": "Input/output error" 00:22:18.170 } 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:18.170 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.429 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.688 request: 00:22:18.688 { 00:22:18.688 "name": "nvme0", 00:22:18.688 "trtype": "tcp", 00:22:18.688 "traddr": "10.0.0.2", 00:22:18.688 "adrfam": "ipv4", 00:22:18.688 "trsvcid": "4420", 00:22:18.688 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:18.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:18.688 "prchk_reftag": false, 00:22:18.688 "prchk_guard": false, 00:22:18.688 "hdgst": false, 00:22:18.688 "ddgst": false, 00:22:18.688 "dhchap_key": "key0", 00:22:18.688 "dhchap_ctrlr_key": "key1", 00:22:18.688 "allow_unrecognized_csi": false, 00:22:18.688 "method": "bdev_nvme_attach_controller", 00:22:18.688 "req_id": 1 00:22:18.688 } 00:22:18.688 Got JSON-RPC error response 00:22:18.688 response: 00:22:18.688 { 00:22:18.688 "code": -5, 00:22:18.688 "message": "Input/output error" 00:22:18.688 } 00:22:18.946 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:18.946 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:18.946 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:18.946 03:03:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:18.946 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:18.947 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:18.947 03:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:18.947 nvme0n1 00:22:19.206 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:19.206 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:19.206 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.206 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.206 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.206 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.465 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:19.465 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.465 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.465 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.465 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:19.465 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:19.465 03:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:20.400 nvme0n1 00:22:20.400 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:20.400 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:20.400 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.400 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.400 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.400 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.400 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.400 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.400 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:20.400 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.400 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:20.659 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.659 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:22:20.659 03:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: --dhchap-ctrl-secret DHHC-1:03:MTZhZmM0MWY1OWJjMjI2NTVkMDFhYjU2YzdhNjViMDUwMjU2MDFhNmVkZWIxYWU1OTlhM2QxMjIzNjk4ZGRiZW98uck=: 00:22:21.226 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:21.226 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:21.226 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:21.226 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:21.226 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:21.226 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:21.226 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:21.226 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.226 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.484 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:21.484 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:21.484 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:21.484 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:21.484 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:21.484 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:21.484 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:21.484 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:21.484 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:21.484 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:21.743 request: 00:22:21.743 { 00:22:21.743 "name": "nvme0", 00:22:21.743 "trtype": "tcp", 00:22:21.743 "traddr": "10.0.0.2", 00:22:21.743 "adrfam": "ipv4", 00:22:21.743 "trsvcid": "4420", 00:22:21.743 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:21.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:21.743 "prchk_reftag": false, 00:22:21.743 "prchk_guard": false, 00:22:21.743 "hdgst": false, 00:22:21.743 "ddgst": false, 00:22:21.743 "dhchap_key": "key1", 00:22:21.743 "allow_unrecognized_csi": false, 00:22:21.743 "method": "bdev_nvme_attach_controller", 00:22:21.743 "req_id": 1 00:22:21.743 } 00:22:21.743 Got JSON-RPC error response 00:22:21.743 response: 00:22:21.743 { 00:22:21.743 "code": -5, 00:22:21.743 "message": "Input/output error" 00:22:21.743 } 00:22:21.743 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:21.743 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:21.743 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:21.743 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:21.743 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:21.743 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:21.743 03:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:22.678 nvme0n1 00:22:22.678 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:22.678 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:22.678 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.678 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.678 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.678 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.936 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:22.936 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.936 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.936 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.936 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:22.936 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:22.936 03:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:23.194 nvme0n1 00:22:23.194 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:23.194 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:23.194 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.452 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.452 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.452 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: '' 2s 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: ]] 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjY4ZDhlNGVhNGVhYTM5OTgzMzE2MDVjNGRhMDE1ZTS2ZoBg: 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:23.710 03:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: 2s 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: ]] 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Zjg4NzBiZjcwMGU2NDRlOWRkM2ZmNTRkMjAxN2FmYzBmYmQ1OTk5ZTY3NDBkMzZjpwh2lg==: 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:25.610 03:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:28.137 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:28.138 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:28.138 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:28.138 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:28.138 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:28.138 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:28.138 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:28.138 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.138 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:28.138 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.138 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.138 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.138 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:28.138 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:28.138 03:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:28.395 nvme0n1 00:22:28.395 03:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:28.395 03:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.395 03:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.396 03:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.396 03:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:28.396 03:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:28.961 03:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:28.961 03:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:28.961 03:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.219 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.219 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:29.219 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.219 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.219 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.219 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:29.219 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:29.477 03:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:30.044 request: 00:22:30.044 { 00:22:30.045 "name": "nvme0", 00:22:30.045 "dhchap_key": "key1", 00:22:30.045 "dhchap_ctrlr_key": "key3", 00:22:30.045 "method": "bdev_nvme_set_keys", 00:22:30.045 "req_id": 1 00:22:30.045 } 00:22:30.045 Got JSON-RPC error response 00:22:30.045 response: 00:22:30.045 { 00:22:30.045 "code": -13, 00:22:30.045 "message": "Permission denied" 00:22:30.045 } 00:22:30.045 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:30.045 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.045 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.045 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.045 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:30.045 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:30.045 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.303 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:30.303 03:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:31.235 03:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:31.235 03:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:31.235 03:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.494 03:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:31.494 03:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:31.494 03:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.494 03:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.494 03:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.494 03:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:31.494 03:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:31.494 03:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:32.058 nvme0n1 00:22:32.058 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:32.058 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.058 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.058 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.058 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.058 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:32.058 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.058 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
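At this point target/auth.sh has rotated the subsystem keys and is waiting for the host to drop the stale controller: it reads bdev_nvme_get_controllers through the host RPC socket, takes jq length of the result, and sleeps 1s between checks (the controller was attached with --ctrlr-loss-timeout-sec 1, so once re-authentication against the rotated keys fails the host gives up and detaches it). A minimal stand-alone sketch of that wait loop in bash, using the same rpc.py path and /var/tmp/host.sock socket shown in the log; the 30-attempt cap is an illustrative assumption, not a value taken from the script:

    #!/usr/bin/env bash
    # Poll the host-side bdev_nvme layer until no controllers remain attached.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/host.sock
    for _ in $(seq 1 30); do
        count=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq length)
        (( count == 0 )) && exit 0   # stale controller gone, the re-key test can proceed
        sleep 1s
    done
    echo "controller still attached after 30s" >&2
    exit 1

The same poll-and-compare pattern ("jq length", "(( N != 0 ))", "sleep 1s") recurs at auth.sh@262-263 and @272-273 below.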
00:22:32.058 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.058 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:32.058 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.058 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.058 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.624 request: 00:22:32.624 { 00:22:32.624 "name": "nvme0", 00:22:32.624 "dhchap_key": "key2", 00:22:32.624 "dhchap_ctrlr_key": "key0", 00:22:32.624 "method": "bdev_nvme_set_keys", 00:22:32.624 "req_id": 1 00:22:32.624 } 00:22:32.624 Got JSON-RPC error response 00:22:32.624 response: 00:22:32.624 { 00:22:32.624 "code": -13, 00:22:32.624 "message": "Permission denied" 00:22:32.624 } 00:22:32.624 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:32.624 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:32.624 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:32.624 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:32.624 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:32.624 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:32.624 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.882 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:32.882 03:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:33.815 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:33.815 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:33.815 03:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.073 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:34.073 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:34.073 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:34.073 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 308308 00:22:34.074 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 308308 ']' 00:22:34.074 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 308308 00:22:34.074 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:34.074 03:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.074 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 308308 00:22:34.074 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:34.074 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:34.074 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 308308' 00:22:34.074 killing process with pid 308308 00:22:34.074 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 308308 00:22:34.074 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 308308 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.332 rmmod nvme_tcp 00:22:34.332 rmmod nvme_fabrics 00:22:34.332 rmmod nvme_keyring 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 311697 ']' 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 311697 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 311697 ']' 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 311697 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.332 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 311697 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 311697' 00:22:34.592 killing process with pid 311697 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 311697 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 311697 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.592 03:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.129 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.129 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wwy /tmp/spdk.key-sha256.14D /tmp/spdk.key-sha384.n1o /tmp/spdk.key-sha512.Lul /tmp/spdk.key-sha512.T0a /tmp/spdk.key-sha384.Rbb /tmp/spdk.key-sha256.HF0 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:37.129 00:22:37.129 real 2m33.264s 00:22:37.129 user 5m52.783s 00:22:37.129 sys 0m23.962s 00:22:37.129 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.130 ************************************ 00:22:37.130 END TEST nvmf_auth_target 00:22:37.130 ************************************ 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:37.130 ************************************ 00:22:37.130 START TEST nvmf_bdevio_no_huge 00:22:37.130 ************************************ 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:37.130 * Looking for test storage... 
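For reference, the cleanup entries above (killprocess on the host and target pids, modprobe -v -r of nvme-tcp and nvme-fabrics, the iptables-save | grep -v SPDK_NVMF | iptables-restore pass, the address flush, and the rm -f of the generated DHHC key files) are the nvmftestfini teardown for this job. A condensed bash sketch of the same sequence, assuming this job's interface and namespace names (cvl_0_1, cvl_0_0_ns_spdk); the pid variables and the key-file glob are placeholders, not values from the log, and ip netns del stands in for the _remove_spdk_ns helper:

    #!/usr/bin/env bash
    # Teardown mirroring the nvmf_auth_target cleanup sequence in this log.
    for pid in "$host_pid" "$tgt_pid"; do              # placeholder pid variables
        kill "$pid" 2>/dev/null || continue
        while kill -0 "$pid" 2>/dev/null; do sleep 0.5; done
    done
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Keep every firewall rule except the SPDK_NVMF-tagged ones added during setup.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1                           # initiator-side test interface
    ip netns del cvl_0_0_ns_spdk 2>/dev/null           # target namespace created during setup
    rm -f /tmp/spdk.key-*                              # generated DH-HMAC-CHAP key files (placeholder glob)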
00:22:37.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:37.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.130 --rc genhtml_branch_coverage=1 00:22:37.130 --rc genhtml_function_coverage=1 00:22:37.130 --rc genhtml_legend=1 00:22:37.130 --rc geninfo_all_blocks=1 00:22:37.130 --rc geninfo_unexecuted_blocks=1 00:22:37.130 00:22:37.130 ' 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:37.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.130 --rc genhtml_branch_coverage=1 00:22:37.130 --rc genhtml_function_coverage=1 00:22:37.130 --rc genhtml_legend=1 00:22:37.130 --rc geninfo_all_blocks=1 00:22:37.130 --rc geninfo_unexecuted_blocks=1 00:22:37.130 00:22:37.130 ' 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:37.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.130 --rc genhtml_branch_coverage=1 00:22:37.130 --rc genhtml_function_coverage=1 00:22:37.130 --rc genhtml_legend=1 00:22:37.130 --rc geninfo_all_blocks=1 00:22:37.130 --rc geninfo_unexecuted_blocks=1 00:22:37.130 00:22:37.130 ' 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:37.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.130 --rc genhtml_branch_coverage=1 00:22:37.130 --rc genhtml_function_coverage=1 00:22:37.130 --rc genhtml_legend=1 00:22:37.130 --rc geninfo_all_blocks=1 00:22:37.130 --rc geninfo_unexecuted_blocks=1 00:22:37.130 00:22:37.130 ' 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.130 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:37.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.131 03:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.131 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:37.131 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:37.131 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.131 03:03:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.409 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.409 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.409 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.409 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.409 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.409 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.678 
03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:42.678 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:42.678 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:42.678 Found net devices under 0000:af:00.0: cvl_0_0 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.678 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:42.679 Found net devices under 0000:af:00.1: cvl_0_1 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:22:42.679 00:22:42.679 --- 10.0.0.2 ping statistics --- 00:22:42.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.679 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:22:42.679 00:22:42.679 --- 10.0.0.1 ping statistics --- 00:22:42.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.679 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.679 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.943 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:42.943 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.943 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:42.943 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.943 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=314308 00:22:42.943 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:42.943 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 314308 00:22:42.943 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 314308 ']' 00:22:42.943 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.943 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.943 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.943 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.943 03:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.943 [2024-12-14 03:03:57.896392] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:42.943 [2024-12-14 03:03:57.896436] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:42.943 [2024-12-14 03:03:57.978512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.943 [2024-12-14 03:03:58.013355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.943 [2024-12-14 03:03:58.013387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.943 [2024-12-14 03:03:58.013394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.943 [2024-12-14 03:03:58.013399] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.943 [2024-12-14 03:03:58.013404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
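For reference, the nvmf_tgt launch traced above reduces to roughly the following bash sketch; the polling loop is only a stand-in for the harness's waitforlisten helper, while the binary path, namespace name, core mask and memory size are taken straight from the trace:

#!/usr/bin/env bash
# Start the SPDK NVMe-oF target inside the test namespace, without hugepages.
NS=cvl_0_0_ns_spdk
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Wait until the RPC socket appears (simplified stand-in for waitforlisten).
for _ in $(seq 1 100); do
    [ -S "$RPC_SOCK" ] && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done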
00:22:42.943 [2024-12-14 03:03:58.014473] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:42.943 [2024-12-14 03:03:58.014587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:22:42.943 [2024-12-14 03:03:58.014715] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.943 [2024-12-14 03:03:58.014717] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.210 [2024-12-14 03:03:58.162849] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.210 Malloc0 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.210 [2024-12-14 03:03:58.207131] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.210 { 00:22:43.210 "params": { 00:22:43.210 "name": "Nvme$subsystem", 00:22:43.210 "trtype": "$TEST_TRANSPORT", 00:22:43.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.210 "adrfam": "ipv4", 00:22:43.210 "trsvcid": "$NVMF_PORT", 00:22:43.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.210 "hdgst": ${hdgst:-false}, 00:22:43.210 "ddgst": ${ddgst:-false} 00:22:43.210 }, 00:22:43.210 "method": "bdev_nvme_attach_controller" 00:22:43.210 } 00:22:43.210 EOF 00:22:43.210 )") 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:43.210 03:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:43.210 "params": { 00:22:43.210 "name": "Nvme1", 00:22:43.210 "trtype": "tcp", 00:22:43.210 "traddr": "10.0.0.2", 00:22:43.210 "adrfam": "ipv4", 00:22:43.210 "trsvcid": "4420", 00:22:43.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.210 "hdgst": false, 00:22:43.210 "ddgst": false 00:22:43.210 }, 00:22:43.210 "method": "bdev_nvme_attach_controller" 00:22:43.210 }' 00:22:43.210 [2024-12-14 03:03:58.258683] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
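The rpc_cmd calls in bdevio.sh above provision the target end to end; collapsed into plain rpc.py invocations (assuming the default /var/tmp/spdk.sock RPC socket) they amount to the sketch below. bdevio then attaches to the same 10.0.0.2:4420 listener as an initiator using the bdev_nvme_attach_controller JSON printed above, passed via --json /dev/fd/62.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport with 8 KiB in-capsule data, as in the trace.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
# 64 MiB RAM-backed bdev with 512-byte blocks (131072 blocks), exposed as namespace 1.
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420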
00:22:43.210 [2024-12-14 03:03:58.258724] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid314335 ] 00:22:43.210 [2024-12-14 03:03:58.335111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:43.479 [2024-12-14 03:03:58.372452] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.479 [2024-12-14 03:03:58.372560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.479 [2024-12-14 03:03:58.372560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.479 I/O targets: 00:22:43.479 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:43.479 00:22:43.479 00:22:43.479 CUnit - A unit testing framework for C - Version 2.1-3 00:22:43.479 http://cunit.sourceforge.net/ 00:22:43.479 00:22:43.479 00:22:43.479 Suite: bdevio tests on: Nvme1n1 00:22:43.751 Test: blockdev write read block ...passed 00:22:43.751 Test: blockdev write zeroes read block ...passed 00:22:43.751 Test: blockdev write zeroes read no split ...passed 00:22:43.751 Test: blockdev write zeroes read split ...passed 00:22:43.751 Test: blockdev write zeroes read split partial ...passed 00:22:43.751 Test: blockdev reset ...[2024-12-14 03:03:58.741026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:43.751 [2024-12-14 03:03:58.741087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fded00 (9): Bad file descriptor 00:22:43.751 [2024-12-14 03:03:58.795185] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:43.751 passed 00:22:43.751 Test: blockdev write read 8 blocks ...passed 00:22:43.751 Test: blockdev write read size > 128k ...passed 00:22:43.751 Test: blockdev write read invalid size ...passed 00:22:43.751 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:43.751 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:43.751 Test: blockdev write read max offset ...passed 00:22:44.027 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:44.027 Test: blockdev writev readv 8 blocks ...passed 00:22:44.027 Test: blockdev writev readv 30 x 1block ...passed 00:22:44.027 Test: blockdev writev readv block ...passed 00:22:44.027 Test: blockdev writev readv size > 128k ...passed 00:22:44.027 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:44.027 Test: blockdev comparev and writev ...[2024-12-14 03:03:59.011471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.027 [2024-12-14 03:03:59.011503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.028 [2024-12-14 03:03:59.011518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.028 [2024-12-14 03:03:59.011525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:44.028 [2024-12-14 03:03:59.011855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.028 [2024-12-14 03:03:59.011865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:44.028 [2024-12-14 03:03:59.011876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.028 [2024-12-14 03:03:59.011883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:44.028 [2024-12-14 03:03:59.012237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.028 [2024-12-14 03:03:59.012248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:44.028 [2024-12-14 03:03:59.012259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.028 [2024-12-14 03:03:59.012266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:44.028 [2024-12-14 03:03:59.012625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.028 [2024-12-14 03:03:59.012635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:44.028 [2024-12-14 03:03:59.012647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.028 [2024-12-14 03:03:59.012654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:44.028 passed 00:22:44.028 Test: blockdev nvme passthru rw ...passed 00:22:44.028 Test: blockdev nvme passthru vendor specific ...[2024-12-14 03:03:59.094686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.028 [2024-12-14 03:03:59.094700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.028 [2024-12-14 03:03:59.094808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.028 [2024-12-14 03:03:59.094818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:44.028 [2024-12-14 03:03:59.094934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.028 [2024-12-14 03:03:59.094943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:44.028 [2024-12-14 03:03:59.095060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.028 [2024-12-14 03:03:59.095073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:44.028 passed 00:22:44.028 Test: blockdev nvme admin passthru ...passed 00:22:44.028 Test: blockdev copy ...passed 00:22:44.028 00:22:44.028 Run Summary: Type Total Ran Passed Failed Inactive 00:22:44.028 suites 1 1 n/a 0 0 00:22:44.028 tests 23 23 23 0 0 00:22:44.028 asserts 152 152 152 0 n/a 00:22:44.028 00:22:44.028 Elapsed time = 1.161 seconds 00:22:44.305 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:44.305 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.305 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.305 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.305 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:44.305 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:44.305 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:44.305 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:44.305 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:44.305 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:44.305 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:44.305 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:44.305 rmmod nvme_tcp 00:22:44.305 rmmod nvme_fabrics 00:22:44.580 rmmod nvme_keyring 00:22:44.580 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:44.580 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:44.580 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:44.580 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 314308 ']' 00:22:44.580 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 314308 00:22:44.580 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 314308 ']' 00:22:44.580 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 314308 00:22:44.580 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:44.580 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.581 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 314308 00:22:44.581 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:44.581 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:44.581 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 314308' 00:22:44.581 killing process with pid 314308 00:22:44.581 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 314308 00:22:44.581 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 314308 00:22:44.867 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:44.867 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:44.867 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:44.867 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:44.867 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:44.867 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:44.867 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:44.867 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:44.867 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:44.867 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.867 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.867 03:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.844 03:04:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:46.844 00:22:46.844 real 0m10.079s 00:22:46.844 user 0m10.757s 00:22:46.844 sys 0m5.192s 00:22:46.844 03:04:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.844 03:04:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:46.844 ************************************ 00:22:46.844 END TEST nvmf_bdevio_no_huge 00:22:46.844 ************************************ 00:22:46.844 03:04:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:46.844 03:04:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:46.844 03:04:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.844 03:04:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:46.844 ************************************ 00:22:46.844 START TEST nvmf_tls 00:22:46.844 ************************************ 00:22:46.844 03:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:47.129 * Looking for test storage... 00:22:47.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:47.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.129 --rc genhtml_branch_coverage=1 00:22:47.129 --rc genhtml_function_coverage=1 00:22:47.129 --rc genhtml_legend=1 00:22:47.129 --rc geninfo_all_blocks=1 00:22:47.129 --rc geninfo_unexecuted_blocks=1 00:22:47.129 00:22:47.129 ' 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:47.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.129 --rc genhtml_branch_coverage=1 00:22:47.129 --rc genhtml_function_coverage=1 00:22:47.129 --rc genhtml_legend=1 00:22:47.129 --rc geninfo_all_blocks=1 00:22:47.129 --rc geninfo_unexecuted_blocks=1 00:22:47.129 00:22:47.129 ' 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:47.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.129 --rc genhtml_branch_coverage=1 00:22:47.129 --rc genhtml_function_coverage=1 00:22:47.129 --rc genhtml_legend=1 00:22:47.129 --rc geninfo_all_blocks=1 00:22:47.129 --rc geninfo_unexecuted_blocks=1 00:22:47.129 00:22:47.129 ' 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:47.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.129 --rc genhtml_branch_coverage=1 00:22:47.129 --rc genhtml_function_coverage=1 00:22:47.129 --rc genhtml_legend=1 00:22:47.129 --rc geninfo_all_blocks=1 00:22:47.129 --rc geninfo_unexecuted_blocks=1 00:22:47.129 00:22:47.129 ' 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
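The scripts/common.sh trace above (cmp_versions splitting the two version strings on IFS=.-: and walking the fields) boils down to a small helper; the following is a simplified, hand-written sketch, not the project's exact implementation:

# Field-wise "less than" comparison of dotted version strings.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i a b
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}
        ((a < b)) && return 0
        ((a > b)) && return 1
    done
    return 1
}

version_lt 1.15 2 && echo "lcov older than 2.x: keep the legacy --rc lcov_* options"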
00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.129 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:47.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:47.130 03:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
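The arrays being built here are lookup tables of supported NIC device IDs; the discovery that follows (and that already ran once for the bdevio test above) maps each matching PCI function to its kernel interface through sysfs. A minimal stand-alone sketch for the E810 case, assuming lspci is available and using the same sysfs lookup as nvmf/common.sh:

#!/usr/bin/env bash
# Find Intel E810 ports (device IDs 0x1592 / 0x159b, as in the e810 array) and
# resolve each PCI function to its net interface via /sys/bus/pci/devices.
declare -a net_devs=()
while read -r pci _ id _; do
    [[ $id == 8086:1592 || $id == 8086:159b ]] || continue
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] && net_devs+=("${path##*/}")
    done
done < <(lspci -nD)

(( ${#net_devs[@]} )) && printf 'Found net device: %s\n' "${net_devs[@]}"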
00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:52.618 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:52.618 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:52.618 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:52.619 Found net devices under 0000:af:00.0: cvl_0_0 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:52.619 Found net devices under 0000:af:00.1: cvl_0_1 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.619 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.878 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.878 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.878 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:52.878 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.878 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.878 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.878 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:52.878 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:52.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:22:52.878 00:22:52.879 --- 10.0.0.2 ping statistics --- 00:22:52.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.879 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:52.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:22:52.879 00:22:52.879 --- 10.0.0.1 ping statistics --- 00:22:52.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.879 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=316616 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 316616 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 316616 ']' 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.879 03:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.138 [2024-12-14 03:04:08.043488] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:53.138 [2024-12-14 03:04:08.043537] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.138 [2024-12-14 03:04:08.124005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.138 [2024-12-14 03:04:08.145568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.138 [2024-12-14 03:04:08.145602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.138 [2024-12-14 03:04:08.145608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.138 [2024-12-14 03:04:08.145615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.138 [2024-12-14 03:04:08.145620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:53.138 [2024-12-14 03:04:08.146081] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.138 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.138 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:53.138 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:53.138 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.138 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.138 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.138 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:53.138 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:53.396 true 00:22:53.396 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.396 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:53.655 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:53.655 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:53.655 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:53.914 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.914 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:53.914 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:53.914 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:53.914 03:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:54.173 03:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.173 03:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:54.432 03:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:54.432 03:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:54.432 03:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:54.432 03:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.432 03:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:54.432 03:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:54.432 03:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:54.691 03:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.691 03:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:54.950 03:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:54.950 03:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:54.950 03:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:55.209 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:55.209 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:55.209 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:55.209 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:55.209 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:55.209 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:55.209 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:55.209 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:55.209 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:55.209 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:55.209 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.72Y4OdGFd7 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.83BVMzZ21Y 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.72Y4OdGFd7 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.83BVMzZ21Y 00:22:55.468 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:55.727 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:55.727 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.72Y4OdGFd7 00:22:55.727 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.72Y4OdGFd7 00:22:55.727 03:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.986 [2024-12-14 03:04:11.018252] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.986 03:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:56.244 03:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:56.503 [2024-12-14 03:04:11.391209] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:56.503 [2024-12-14 03:04:11.391404] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.503 03:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:56.503 malloc0 00:22:56.503 03:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:56.761 03:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.72Y4OdGFd7 00:22:57.020 03:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:57.020 03:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.72Y4OdGFd7 00:23:09.218 Initializing NVMe Controllers 00:23:09.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:09.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:09.218 Initialization complete. Launching workers. 00:23:09.218 ======================================================== 00:23:09.218 Latency(us) 00:23:09.218 Device Information : IOPS MiB/s Average min max 00:23:09.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16961.78 66.26 3773.26 882.20 4802.21 00:23:09.218 ======================================================== 00:23:09.218 Total : 16961.78 66.26 3773.26 882.20 4802.21 00:23:09.218 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.72Y4OdGFd7 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.72Y4OdGFd7 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=316860 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 316860 /var/tmp/bdevperf.sock 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 316860 ']' 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
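Stripped of the xtrace prefixes, the target-side TLS provisioning that target/tls.sh just performed is a short RPC sequence: after sock_set_default_impl -i ssl and the earlier round-trip checks of --tls-version and --enable-ktls/--disable-ktls via sock_impl_get_options and jq, the suite pins the ssl sock implementation to TLS 1.3, finishes app init, creates the TCP transport, adds a subsystem with a TLS listener (-k), backs it with a malloc bdev, registers the PSK file in the keyring and binds it to the allowed host NQN. A condensed sketch using the paths, key file and NQNs from this run; RPC and KEY are shorthand variables introduced here, with rpc.py talking to the target's default /var/tmp/spdk.sock:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  KEY=/tmp/tmp.72Y4OdGFd7                                      # 0600 file holding NVMeTLSkey-1:01:...

  $RPC sock_impl_set_options -i ssl --tls-version 13
  $RPC framework_start_init

  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  $RPC keyring_file_add_key key0 "$KEY"
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

  # data-path smoke test over TLS, as in the spdk_nvme_perf invocation above
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path "$KEY"

The roughly 17k IOPS reported above is that smoke test completing over the encrypted connection before the failure cases are exercised.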
00:23:09.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.218 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.219 [2024-12-14 03:04:22.319662] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:09.219 [2024-12-14 03:04:22.319707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316860 ] 00:23:09.219 [2024-12-14 03:04:22.393668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.219 [2024-12-14 03:04:22.416112] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.219 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.219 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:09.219 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.72Y4OdGFd7 00:23:09.219 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:09.219 [2024-12-14 03:04:22.850855] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.219 TLSTESTn1 00:23:09.219 03:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:09.219 Running I/O for 10 seconds... 
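On the initiator side the mirror-image steps are driven through bdevperf's private RPC socket rather than the target's: start bdevperf in wait mode (-z), register the same PSK file under the name key0, attach a controller with --psk key0, then launch the workload with bdevperf.py. A sketch of what run_bdevperf drives, with the socket path and key from this run; backgrounding and wait handling are simplified here compared to the harness:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &

  $RPC -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.72Y4OdGFd7
  $RPC -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s "$SOCK" perform_tests

The TLSTESTn1 numbers that follow (about 5.3k IOPS at queue depth 128 with verify enabled) are the output of that perform_tests call.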
00:23:10.153 5027.00 IOPS, 19.64 MiB/s [2024-12-14T02:04:26.221Z] 5228.50 IOPS, 20.42 MiB/s [2024-12-14T02:04:27.157Z] 5197.00 IOPS, 20.30 MiB/s [2024-12-14T02:04:28.093Z] 5306.00 IOPS, 20.73 MiB/s [2024-12-14T02:04:29.469Z] 5291.60 IOPS, 20.67 MiB/s [2024-12-14T02:04:30.406Z] 5292.67 IOPS, 20.67 MiB/s [2024-12-14T02:04:31.341Z] 5247.86 IOPS, 20.50 MiB/s [2024-12-14T02:04:32.277Z] 5245.88 IOPS, 20.49 MiB/s [2024-12-14T02:04:33.214Z] 5264.00 IOPS, 20.56 MiB/s [2024-12-14T02:04:33.214Z] 5279.70 IOPS, 20.62 MiB/s 00:23:18.081 Latency(us) 00:23:18.081 [2024-12-14T02:04:33.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.081 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:18.081 Verification LBA range: start 0x0 length 0x2000 00:23:18.081 TLSTESTn1 : 10.02 5282.93 20.64 0.00 0.00 24192.36 5586.16 31706.94 00:23:18.081 [2024-12-14T02:04:33.214Z] =================================================================================================================== 00:23:18.081 [2024-12-14T02:04:33.214Z] Total : 5282.93 20.64 0.00 0.00 24192.36 5586.16 31706.94 00:23:18.081 { 00:23:18.081 "results": [ 00:23:18.081 { 00:23:18.081 "job": "TLSTESTn1", 00:23:18.081 "core_mask": "0x4", 00:23:18.081 "workload": "verify", 00:23:18.081 "status": "finished", 00:23:18.081 "verify_range": { 00:23:18.081 "start": 0, 00:23:18.081 "length": 8192 00:23:18.081 }, 00:23:18.081 "queue_depth": 128, 00:23:18.081 "io_size": 4096, 00:23:18.081 "runtime": 10.017925, 00:23:18.081 "iops": 5282.9303473523705, 00:23:18.081 "mibps": 20.636446669345197, 00:23:18.081 "io_failed": 0, 00:23:18.081 "io_timeout": 0, 00:23:18.081 "avg_latency_us": 24192.36133783935, 00:23:18.081 "min_latency_us": 5586.1638095238095, 00:23:18.081 "max_latency_us": 31706.94095238095 00:23:18.081 } 00:23:18.081 ], 00:23:18.081 "core_count": 1 00:23:18.081 } 00:23:18.081 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.081 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 316860 00:23:18.081 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 316860 ']' 00:23:18.081 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 316860 00:23:18.081 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:18.081 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.081 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 316860 00:23:18.081 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:18.081 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:18.081 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 316860' 00:23:18.081 killing process with pid 316860 00:23:18.081 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 316860 00:23:18.081 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.081 00:23:18.081 Latency(us) 00:23:18.081 [2024-12-14T02:04:33.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.081 [2024-12-14T02:04:33.214Z] 
=================================================================================================================== 00:23:18.081 [2024-12-14T02:04:33.214Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.081 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 316860 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.83BVMzZ21Y 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.83BVMzZ21Y 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.83BVMzZ21Y 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.83BVMzZ21Y 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=316994 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 316994 /var/tmp/bdevperf.sock 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 316994 ']' 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
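The bdevperf instance started above is the first negative case: target/tls.sh@147 wraps run_bdevperf in NOT and hands it /tmp/tmp.83BVMzZ21Y, the second key, which the target was never given. The TLS handshake therefore cannot complete and the attach below fails with -5, Input/output error; a non-zero exit is the pass condition. A sketch of that expectation, using the same shorthand as the earlier sketches:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  $RPC -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.83BVMzZ21Y     # PSK the target does not know about
  if $RPC -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
      echo "attach unexpectedly succeeded" >&2
      exit 1
  fi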
00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.340 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.340 [2024-12-14 03:04:33.341246] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:18.340 [2024-12-14 03:04:33.341297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316994 ] 00:23:18.340 [2024-12-14 03:04:33.411712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.340 [2024-12-14 03:04:33.431101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.599 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.599 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:18.599 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.83BVMzZ21Y 00:23:18.599 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:18.858 [2024-12-14 03:04:33.885750] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.858 [2024-12-14 03:04:33.897428] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:18.858 [2024-12-14 03:04:33.898005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106e0c0 (107): Transport endpoint is not connected 00:23:18.858 [2024-12-14 03:04:33.898999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106e0c0 (9): Bad file descriptor 00:23:18.858 [2024-12-14 03:04:33.900000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:18.858 [2024-12-14 03:04:33.900010] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:18.858 [2024-12-14 03:04:33.900018] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:18.858 [2024-12-14 03:04:33.900027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:18.858 request: 00:23:18.858 { 00:23:18.858 "name": "TLSTEST", 00:23:18.858 "trtype": "tcp", 00:23:18.858 "traddr": "10.0.0.2", 00:23:18.858 "adrfam": "ipv4", 00:23:18.858 "trsvcid": "4420", 00:23:18.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.858 "prchk_reftag": false, 00:23:18.858 "prchk_guard": false, 00:23:18.858 "hdgst": false, 00:23:18.858 "ddgst": false, 00:23:18.858 "psk": "key0", 00:23:18.858 "allow_unrecognized_csi": false, 00:23:18.858 "method": "bdev_nvme_attach_controller", 00:23:18.858 "req_id": 1 00:23:18.858 } 00:23:18.858 Got JSON-RPC error response 00:23:18.858 response: 00:23:18.858 { 00:23:18.858 "code": -5, 00:23:18.858 "message": "Input/output error" 00:23:18.858 } 00:23:18.858 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 316994 00:23:18.858 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 316994 ']' 00:23:18.858 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 316994 00:23:18.858 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:18.858 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.858 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 316994 00:23:18.858 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:18.858 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:18.858 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 316994' 00:23:18.858 killing process with pid 316994 00:23:18.858 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 316994 00:23:18.858 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.858 00:23:18.858 Latency(us) 00:23:18.858 [2024-12-14T02:04:33.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.858 [2024-12-14T02:04:33.991Z] =================================================================================================================== 00:23:18.858 [2024-12-14T02:04:33.991Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:18.858 03:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 316994 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.72Y4OdGFd7 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.72Y4OdGFd7 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.72Y4OdGFd7 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.72Y4OdGFd7 00:23:19.117 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.118 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=317020 00:23:19.118 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.118 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.118 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 317020 /var/tmp/bdevperf.sock 00:23:19.118 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317020 ']' 00:23:19.118 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.118 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.118 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.118 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.118 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.118 [2024-12-14 03:04:34.168043] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
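The next case (target/tls.sh@150) keeps the correct key but connects as nqn.2016-06.io.spdk:host2. During the handshake the target looks up the PSK by the TLS identity string it logs below, and since key0 was only bound to host1, the lookup fails and the attach again ends in -5. A small illustration of the identity the target searches for, taken from the tcp_sock_get_key error that follows:

  hostnqn=nqn.2016-06.io.spdk:host2
  subnqn=nqn.2016-06.io.spdk:cnode1
  echo "NVMe0R01 ${hostnqn} ${subnqn}"    # no PSK is registered under this identity, so the handshake is rejected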
00:23:19.118 [2024-12-14 03:04:34.168095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317020 ] 00:23:19.118 [2024-12-14 03:04:34.230584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.376 [2024-12-14 03:04:34.250082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.376 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.376 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:19.377 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.72Y4OdGFd7 00:23:19.635 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:19.635 [2024-12-14 03:04:34.688542] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.635 [2024-12-14 03:04:34.699401] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:19.635 [2024-12-14 03:04:34.699423] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:19.635 [2024-12-14 03:04:34.699446] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:19.635 [2024-12-14 03:04:34.699714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14470c0 (107): Transport endpoint is not connected 00:23:19.635 [2024-12-14 03:04:34.700709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14470c0 (9): Bad file descriptor 00:23:19.635 [2024-12-14 03:04:34.701710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:19.635 [2024-12-14 03:04:34.701719] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:19.635 [2024-12-14 03:04:34.701727] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:19.635 [2024-12-14 03:04:34.701734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:19.635 request: 00:23:19.635 { 00:23:19.635 "name": "TLSTEST", 00:23:19.635 "trtype": "tcp", 00:23:19.635 "traddr": "10.0.0.2", 00:23:19.635 "adrfam": "ipv4", 00:23:19.635 "trsvcid": "4420", 00:23:19.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.635 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:19.635 "prchk_reftag": false, 00:23:19.635 "prchk_guard": false, 00:23:19.635 "hdgst": false, 00:23:19.635 "ddgst": false, 00:23:19.635 "psk": "key0", 00:23:19.635 "allow_unrecognized_csi": false, 00:23:19.635 "method": "bdev_nvme_attach_controller", 00:23:19.635 "req_id": 1 00:23:19.635 } 00:23:19.635 Got JSON-RPC error response 00:23:19.635 response: 00:23:19.635 { 00:23:19.635 "code": -5, 00:23:19.635 "message": "Input/output error" 00:23:19.635 } 00:23:19.636 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 317020 00:23:19.636 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317020 ']' 00:23:19.636 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317020 00:23:19.636 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:19.636 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.636 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317020 00:23:19.894 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317020' 00:23:19.895 killing process with pid 317020 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317020 00:23:19.895 Received shutdown signal, test time was about 10.000000 seconds 00:23:19.895 00:23:19.895 Latency(us) 00:23:19.895 [2024-12-14T02:04:35.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.895 [2024-12-14T02:04:35.028Z] =================================================================================================================== 00:23:19.895 [2024-12-14T02:04:35.028Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317020 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.72Y4OdGFd7 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.72Y4OdGFd7 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.72Y4OdGFd7 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.72Y4OdGFd7 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=317037 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 317037 /var/tmp/bdevperf.sock 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317037 ']' 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.895 03:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.895 [2024-12-14 03:04:34.977461] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
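The third case (target/tls.sh@153) flips the other half of the identity: host1 presents the right key, but against the non-existent subsystem nqn.2016-06.io.spdk:cnode2, so the PSK lookup below fails in the same way. Purely as a counterfactual sketch (not something this test does), making such an attach succeed would require provisioning the second subsystem and binding the key to it, reusing the RPCs already shown; the serial number here is hypothetical:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk key0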
00:23:19.895 [2024-12-14 03:04:34.977509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317037 ] 00:23:20.153 [2024-12-14 03:04:35.048706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.153 [2024-12-14 03:04:35.068032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.153 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.153 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:20.153 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.72Y4OdGFd7 00:23:20.411 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.411 [2024-12-14 03:04:35.518952] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.411 [2024-12-14 03:04:35.523475] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:20.411 [2024-12-14 03:04:35.523496] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:20.411 [2024-12-14 03:04:35.523521] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:20.411 [2024-12-14 03:04:35.524205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106b0c0 (107): Transport endpoint is not connected 00:23:20.411 [2024-12-14 03:04:35.525198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106b0c0 (9): Bad file descriptor 00:23:20.411 [2024-12-14 03:04:35.526199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:20.411 [2024-12-14 03:04:35.526208] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:20.411 [2024-12-14 03:04:35.526215] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:20.411 [2024-12-14 03:04:35.526223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:20.411 request: 00:23:20.411 { 00:23:20.411 "name": "TLSTEST", 00:23:20.411 "trtype": "tcp", 00:23:20.411 "traddr": "10.0.0.2", 00:23:20.411 "adrfam": "ipv4", 00:23:20.411 "trsvcid": "4420", 00:23:20.411 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:20.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.411 "prchk_reftag": false, 00:23:20.411 "prchk_guard": false, 00:23:20.411 "hdgst": false, 00:23:20.411 "ddgst": false, 00:23:20.411 "psk": "key0", 00:23:20.411 "allow_unrecognized_csi": false, 00:23:20.411 "method": "bdev_nvme_attach_controller", 00:23:20.411 "req_id": 1 00:23:20.411 } 00:23:20.411 Got JSON-RPC error response 00:23:20.412 response: 00:23:20.412 { 00:23:20.412 "code": -5, 00:23:20.412 "message": "Input/output error" 00:23:20.412 } 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 317037 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317037 ']' 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317037 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317037 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317037' 00:23:20.671 killing process with pid 317037 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317037 00:23:20.671 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.671 00:23:20.671 Latency(us) 00:23:20.671 [2024-12-14T02:04:35.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.671 [2024-12-14T02:04:35.804Z] =================================================================================================================== 00:23:20.671 [2024-12-14T02:04:35.804Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317037 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.671 03:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=317060 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 317060 /var/tmp/bdevperf.sock 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317060 ']' 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.671 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.671 [2024-12-14 03:04:35.797649] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:20.671 [2024-12-14 03:04:35.797704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317060 ] 00:23:20.930 [2024-12-14 03:04:35.868178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.930 [2024-12-14 03:04:35.887663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.930 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.930 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:20.930 03:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:21.189 [2024-12-14 03:04:36.149459] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:21.189 [2024-12-14 03:04:36.149487] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:21.189 request: 00:23:21.189 { 00:23:21.189 "name": "key0", 00:23:21.189 "path": "", 00:23:21.189 "method": "keyring_file_add_key", 00:23:21.189 "req_id": 1 00:23:21.189 } 00:23:21.189 Got JSON-RPC error response 00:23:21.189 response: 00:23:21.189 { 00:23:21.189 "code": -1, 00:23:21.189 "message": "Operation not permitted" 00:23:21.189 } 00:23:21.189 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.448 [2024-12-14 03:04:36.338027] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:21.448 [2024-12-14 03:04:36.338060] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:21.448 request: 00:23:21.448 { 00:23:21.448 "name": "TLSTEST", 00:23:21.448 "trtype": "tcp", 00:23:21.448 "traddr": "10.0.0.2", 00:23:21.448 "adrfam": "ipv4", 00:23:21.448 "trsvcid": "4420", 00:23:21.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.448 "prchk_reftag": false, 00:23:21.448 "prchk_guard": false, 00:23:21.448 "hdgst": false, 00:23:21.448 "ddgst": false, 00:23:21.448 "psk": "key0", 00:23:21.448 "allow_unrecognized_csi": false, 00:23:21.448 "method": "bdev_nvme_attach_controller", 00:23:21.448 "req_id": 1 00:23:21.448 } 00:23:21.448 Got JSON-RPC error response 00:23:21.448 response: 00:23:21.448 { 00:23:21.448 "code": -126, 00:23:21.448 "message": "Required key not available" 00:23:21.448 } 00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 317060 00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317060 ']' 00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317060 00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317060 
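The final negative case passes an empty string as the key path. keyring_file_add_key rejects it up front (keys must be absolute file paths, hence the -1 Operation not permitted above), so key0 never exists in this bdevperf instance and bdev_nvme_attach_controller fails with -126, Required key not available. The contrast in one pair of calls:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''                      # rejected: non-absolute path
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.72Y4OdGFd7     # accepted: absolute path to a 0600 PSK file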
00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317060' 00:23:21.448 killing process with pid 317060 00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317060 00:23:21.448 Received shutdown signal, test time was about 10.000000 seconds 00:23:21.448 00:23:21.448 Latency(us) 00:23:21.448 [2024-12-14T02:04:36.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.448 [2024-12-14T02:04:36.581Z] =================================================================================================================== 00:23:21.448 [2024-12-14T02:04:36.581Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317060 00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:21.448 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:21.449 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:21.449 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 316616 00:23:21.449 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 316616 ']' 00:23:21.449 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 316616 00:23:21.449 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.449 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.449 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 316616 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 316616' 00:23:21.708 killing process with pid 316616 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 316616 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 316616 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.FFI3omtsdz 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.FFI3omtsdz 00:23:21.708 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:21.709 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.709 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.709 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.709 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:21.709 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=317091 00:23:21.709 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 317091 00:23:21.709 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317091 ']' 00:23:21.709 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.709 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.709 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.709 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.709 03:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.968 [2024-12-14 03:04:36.874007] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:21.968 [2024-12-14 03:04:36.874056] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.968 [2024-12-14 03:04:36.952427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.968 [2024-12-14 03:04:36.970464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.968 [2024-12-14 03:04:36.970508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
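The key_long value generated above is the NVMe TLS PSK interchange form of the configured key: the prefix NVMeTLSkey-1, a hash identifier (2, i.e. SHA-384; 1 would mean SHA-256), and a base64 blob holding the configured key bytes followed by a CRC-32 of those bytes, terminated by a colon. A small stand-alone approximation of that encoding (an illustration that assumes a little-endian CRC-32 appended to the key string exactly as given, not SPDK's own format_interchange_psk helper) is:

    import base64
    import struct
    import zlib

    def interchange_psk(key: str, hash_id: int) -> str:
        # base64(configured PSK bytes || CRC-32 of those bytes), wrapped in the interchange framing.
        raw = key.encode()
        blob = raw + struct.pack("<I", zlib.crc32(raw))
        return "NVMeTLSkey-1:%02d:%s:" % (hash_id, base64.b64encode(blob).decode())

    print(interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))

The test then writes the resulting string to a mktemp file and sets it to mode 0600, which is the permission the keyring expects for key files.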
00:23:21.968 [2024-12-14 03:04:36.970518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.968 [2024-12-14 03:04:36.970526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.968 [2024-12-14 03:04:36.970533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.968 [2024-12-14 03:04:36.971095] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.968 03:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.968 03:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:21.968 03:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.969 03:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.969 03:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.227 03:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.227 03:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.FFI3omtsdz 00:23:22.227 03:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.FFI3omtsdz 00:23:22.227 03:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:22.227 [2024-12-14 03:04:37.264762] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.227 03:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:22.486 03:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:22.745 [2024-12-14 03:04:37.649743] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.745 [2024-12-14 03:04:37.649922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.745 03:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:22.745 malloc0 00:23:22.745 03:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:23.003 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FFI3omtsdz 00:23:23.262 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FFI3omtsdz 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FFI3omtsdz 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=317136 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 317136 /var/tmp/bdevperf.sock 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317136 ']' 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.520 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.520 [2024-12-14 03:04:38.457355] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
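setup_nvmf_tgt, traced above, drives the whole target-side TLS configuration over rpc.py: TCP transport, subsystem, TLS listener (-k, saved as secure_channel in the config dump later in the run), a malloc-backed namespace, the key file in the keyring, and finally the host entry bound to that PSK. Written out as data (the method names and parameters below are the ones visible in this trace and in the later save_config dump; the ordering is the point of the sketch, not an official recipe), the sequence is:

    # Ordered RPC calls behind setup_nvmf_tgt (illustrative values taken from the trace).
    TARGET_TLS_SETUP = [
        ("nvmf_create_transport", {"trtype": "TCP"}),
        ("nvmf_create_subsystem", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                   "serial_number": "SPDK00000000000001",
                                   "max_namespaces": 10}),
        ("nvmf_subsystem_add_listener", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                         "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                                                            "traddr": "10.0.0.2",
                                                            "trsvcid": "4420"},
                                         "secure_channel": True}),
        ("bdev_malloc_create", {"name": "malloc0", "num_blocks": 8192, "block_size": 4096}),
        ("nvmf_subsystem_add_ns", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                   "namespace": {"nsid": 1, "bdev_name": "malloc0"}}),
        ("keyring_file_add_key", {"name": "key0", "path": "/tmp/tmp.FFI3omtsdz"}),
        ("nvmf_subsystem_add_host", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                     "host": "nqn.2016-06.io.spdk:host1",
                                     "psk": "key0"}),
    ]

    for method, params in TARGET_TLS_SETUP:
        print(method, params)

The bdevperf side then only needs keyring_file_add_key on its own RPC socket before bdev_nvme_attach_controller opens the TLS connection with --psk key0.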
00:23:23.520 [2024-12-14 03:04:38.457403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317136 ] 00:23:23.520 [2024-12-14 03:04:38.529563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.521 [2024-12-14 03:04:38.551347] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.521 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.521 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:23.521 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FFI3omtsdz 00:23:23.779 03:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:24.037 [2024-12-14 03:04:38.994012] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.037 TLSTESTn1 00:23:24.037 03:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:24.296 Running I/O for 10 seconds... 00:23:26.167 5277.00 IOPS, 20.61 MiB/s [2024-12-14T02:04:42.236Z] 5397.00 IOPS, 21.08 MiB/s [2024-12-14T02:04:43.613Z] 5353.33 IOPS, 20.91 MiB/s [2024-12-14T02:04:44.549Z] 5164.50 IOPS, 20.17 MiB/s [2024-12-14T02:04:45.484Z] 5216.40 IOPS, 20.38 MiB/s [2024-12-14T02:04:46.420Z] 5201.83 IOPS, 20.32 MiB/s [2024-12-14T02:04:47.356Z] 5185.00 IOPS, 20.25 MiB/s [2024-12-14T02:04:48.291Z] 5107.38 IOPS, 19.95 MiB/s [2024-12-14T02:04:49.227Z] 5033.56 IOPS, 19.66 MiB/s [2024-12-14T02:04:49.227Z] 4972.00 IOPS, 19.42 MiB/s 00:23:34.094 Latency(us) 00:23:34.094 [2024-12-14T02:04:49.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.094 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:34.094 Verification LBA range: start 0x0 length 0x2000 00:23:34.094 TLSTESTn1 : 10.02 4973.79 19.43 0.00 0.00 25691.80 6085.49 31082.79 00:23:34.094 [2024-12-14T02:04:49.227Z] =================================================================================================================== 00:23:34.094 [2024-12-14T02:04:49.227Z] Total : 4973.79 19.43 0.00 0.00 25691.80 6085.49 31082.79 00:23:34.353 { 00:23:34.353 "results": [ 00:23:34.353 { 00:23:34.353 "job": "TLSTESTn1", 00:23:34.353 "core_mask": "0x4", 00:23:34.353 "workload": "verify", 00:23:34.353 "status": "finished", 00:23:34.353 "verify_range": { 00:23:34.353 "start": 0, 00:23:34.353 "length": 8192 00:23:34.353 }, 00:23:34.353 "queue_depth": 128, 00:23:34.353 "io_size": 4096, 00:23:34.353 "runtime": 10.022127, 00:23:34.353 "iops": 4973.794484943166, 00:23:34.353 "mibps": 19.428884706809242, 00:23:34.353 "io_failed": 0, 00:23:34.353 "io_timeout": 0, 00:23:34.353 "avg_latency_us": 25691.800409320524, 00:23:34.353 "min_latency_us": 6085.4857142857145, 00:23:34.353 "max_latency_us": 31082.788571428573 00:23:34.353 } 00:23:34.353 ], 00:23:34.353 
"core_count": 1 00:23:34.353 } 00:23:34.353 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:34.353 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 317136 00:23:34.353 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317136 ']' 00:23:34.353 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317136 00:23:34.353 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:34.353 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.353 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317136 00:23:34.353 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317136' 00:23:34.354 killing process with pid 317136 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317136 00:23:34.354 Received shutdown signal, test time was about 10.000000 seconds 00:23:34.354 00:23:34.354 Latency(us) 00:23:34.354 [2024-12-14T02:04:49.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.354 [2024-12-14T02:04:49.487Z] =================================================================================================================== 00:23:34.354 [2024-12-14T02:04:49.487Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317136 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.FFI3omtsdz 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FFI3omtsdz 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FFI3omtsdz 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FFI3omtsdz 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:34.354 
03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FFI3omtsdz 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=317278 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 317278 /var/tmp/bdevperf.sock 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317278 ']' 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.354 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.612 [2024-12-14 03:04:49.509933] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
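The verification pass above ends with bdevperf's per-run statistics, first as a table and then as JSON (roughly 4974 IOPS and about 25.7 ms average latency for TLSTESTn1 over the 10 second run). When those numbers need to be consumed by a script rather than read from the console, a few lines are enough; the field names below are the ones present in the dumped JSON, while the parsing itself is only an illustration:

    import json

    def summarize(results_json: str) -> str:
        # Pull the headline numbers out of a perform_tests result document.
        doc = json.loads(results_json)
        return "\n".join(
            "%s: %.0f IOPS, %.2f MiB/s, avg latency %.1f us"
            % (job["job"], job["iops"], job["mibps"], job["avg_latency_us"])
            for job in doc["results"])

    example = '{"results": [{"job": "TLSTESTn1", "iops": 4973.79, "mibps": 19.43, ' \
              '"avg_latency_us": 25691.80}], "core_count": 1}'
    print(summarize(example))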
00:23:34.612 [2024-12-14 03:04:49.509985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317278 ] 00:23:34.612 [2024-12-14 03:04:49.582939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.612 [2024-12-14 03:04:49.603006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.612 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.612 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.612 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FFI3omtsdz 00:23:34.871 [2024-12-14 03:04:49.856919] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FFI3omtsdz': 0100666 00:23:34.871 [2024-12-14 03:04:49.856949] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:34.871 request: 00:23:34.871 { 00:23:34.871 "name": "key0", 00:23:34.871 "path": "/tmp/tmp.FFI3omtsdz", 00:23:34.871 "method": "keyring_file_add_key", 00:23:34.871 "req_id": 1 00:23:34.871 } 00:23:34.871 Got JSON-RPC error response 00:23:34.871 response: 00:23:34.871 { 00:23:34.871 "code": -1, 00:23:34.871 "message": "Operation not permitted" 00:23:34.871 } 00:23:34.871 03:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.130 [2024-12-14 03:04:50.049508] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.130 [2024-12-14 03:04:50.049542] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:35.130 request: 00:23:35.130 { 00:23:35.130 "name": "TLSTEST", 00:23:35.130 "trtype": "tcp", 00:23:35.130 "traddr": "10.0.0.2", 00:23:35.130 "adrfam": "ipv4", 00:23:35.130 "trsvcid": "4420", 00:23:35.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.130 "prchk_reftag": false, 00:23:35.130 "prchk_guard": false, 00:23:35.130 "hdgst": false, 00:23:35.130 "ddgst": false, 00:23:35.130 "psk": "key0", 00:23:35.130 "allow_unrecognized_csi": false, 00:23:35.130 "method": "bdev_nvme_attach_controller", 00:23:35.130 "req_id": 1 00:23:35.130 } 00:23:35.130 Got JSON-RPC error response 00:23:35.130 response: 00:23:35.130 { 00:23:35.130 "code": -126, 00:23:35.130 "message": "Required key not available" 00:23:35.130 } 00:23:35.130 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 317278 00:23:35.130 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317278 ']' 00:23:35.130 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317278 00:23:35.130 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:35.130 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.130 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317278 00:23:35.130 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:35.130 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:35.130 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317278' 00:23:35.130 killing process with pid 317278 00:23:35.130 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317278 00:23:35.130 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.130 00:23:35.130 Latency(us) 00:23:35.130 [2024-12-14T02:04:50.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.130 [2024-12-14T02:04:50.263Z] =================================================================================================================== 00:23:35.130 [2024-12-14T02:04:50.263Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:35.130 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317278 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 317091 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317091 ']' 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317091 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317091 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317091' 00:23:35.390 killing process with pid 317091 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317091 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317091 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=317309 
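The chmod 0666 a little earlier deliberately broke the key file: the keyring rejects key files that are accessible to group or other, which is exactly the "Invalid permissions for key file ... 0100666" error seen in the bdevperf run above. A stand-alone check with the same effect (an illustration of the rule the test exercises, not the keyring's own code) looks like this:

    import os
    import stat

    def check_key_file(path: str) -> None:
        mode = os.stat(path).st_mode
        # Anything beyond owner access is refused; 0600 passes, 0666 does not.
        if mode & (stat.S_IRWXG | stat.S_IRWXO):
            raise PermissionError("Invalid permissions for key file '%s': 0%o" % (path, mode))

    # check_key_file("/tmp/tmp.FFI3omtsdz")  # raises while the file is 0666, passes once it is 0600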
00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 317309 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317309 ']' 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.390 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.649 [2024-12-14 03:04:50.540484] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:35.649 [2024-12-14 03:04:50.540533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.649 [2024-12-14 03:04:50.616059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.649 [2024-12-14 03:04:50.633797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.649 [2024-12-14 03:04:50.633832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.649 [2024-12-14 03:04:50.633840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.649 [2024-12-14 03:04:50.633846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.649 [2024-12-14 03:04:50.633851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:35.649 [2024-12-14 03:04:50.634335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.FFI3omtsdz 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.FFI3omtsdz 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.FFI3omtsdz 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.FFI3omtsdz 00:23:35.649 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:35.908 [2024-12-14 03:04:50.928334] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.908 03:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:36.167 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:36.425 [2024-12-14 03:04:51.313323] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.425 [2024-12-14 03:04:51.313513] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.425 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:36.425 malloc0 00:23:36.426 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:36.684 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FFI3omtsdz 00:23:36.943 [2024-12-14 
03:04:51.866617] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FFI3omtsdz': 0100666 00:23:36.943 [2024-12-14 03:04:51.866640] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:36.943 request: 00:23:36.943 { 00:23:36.943 "name": "key0", 00:23:36.943 "path": "/tmp/tmp.FFI3omtsdz", 00:23:36.943 "method": "keyring_file_add_key", 00:23:36.943 "req_id": 1 00:23:36.943 } 00:23:36.943 Got JSON-RPC error response 00:23:36.943 response: 00:23:36.943 { 00:23:36.943 "code": -1, 00:23:36.943 "message": "Operation not permitted" 00:23:36.943 } 00:23:36.943 03:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:36.943 [2024-12-14 03:04:52.059137] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:36.943 [2024-12-14 03:04:52.059167] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:36.943 request: 00:23:36.943 { 00:23:36.943 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.943 "host": "nqn.2016-06.io.spdk:host1", 00:23:36.943 "psk": "key0", 00:23:36.943 "method": "nvmf_subsystem_add_host", 00:23:36.943 "req_id": 1 00:23:36.943 } 00:23:36.943 Got JSON-RPC error response 00:23:36.943 response: 00:23:36.943 { 00:23:36.943 "code": -32603, 00:23:36.943 "message": "Internal error" 00:23:36.943 } 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 317309 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317309 ']' 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317309 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317309 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317309' 00:23:37.202 killing process with pid 317309 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317309 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317309 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.FFI3omtsdz 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=317368 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 317368 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317368 ']' 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.202 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.462 [2024-12-14 03:04:52.362232] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:37.462 [2024-12-14 03:04:52.362274] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.462 [2024-12-14 03:04:52.432710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.462 [2024-12-14 03:04:52.453597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.462 [2024-12-14 03:04:52.453631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.462 [2024-12-14 03:04:52.453639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.462 [2024-12-14 03:04:52.453645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.462 [2024-12-14 03:04:52.453649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
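While the key file was still 0666, the restarted target hit the same pair of errors: keyring_file_add_key failed with code -1, and the nvmf_subsystem_add_host that references key0 then failed with -32603 because the key was never added. The test restores the file to 0600 before the fresh target started above. Together with the -126 from the earlier attach attempts, those are the three error codes this section exercises; a small lookup table (purely descriptive, mirroring the responses logged above) makes the relationships explicit:

    # JSON-RPC error codes observed in this section and what produced them.
    OBSERVED_ERRORS = {
        -1: "keyring_file_add_key: Operation not permitted (empty path or 0666 key file)",
        -126: "bdev_nvme_attach_controller: Required key not available (PSK could not be loaded)",
        -32603: "nvmf_subsystem_add_host: Internal error (referenced key does not exist)",
    }

    def explain(code: int) -> str:
        return OBSERVED_ERRORS.get(code, "unexpected error code %d" % code)

    print(explain(-32603))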
00:23:37.462 [2024-12-14 03:04:52.454129] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.462 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.462 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:37.462 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.462 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.462 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.462 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.462 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.FFI3omtsdz 00:23:37.462 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.FFI3omtsdz 00:23:37.462 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:37.720 [2024-12-14 03:04:52.749332] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.720 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:37.979 03:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:38.237 [2024-12-14 03:04:53.134315] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:38.237 [2024-12-14 03:04:53.134493] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.237 03:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:38.237 malloc0 00:23:38.238 03:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:38.496 03:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FFI3omtsdz 00:23:38.754 03:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.013 03:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:39.013 03:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=317409 00:23:39.013 03:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:39.013 03:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 317409 /var/tmp/bdevperf.sock 00:23:39.013 03:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 317409 ']' 00:23:39.013 03:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.013 03:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.013 03:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:39.013 03:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.013 03:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.013 [2024-12-14 03:04:53.963436] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:39.013 [2024-12-14 03:04:53.963485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317409 ] 00:23:39.013 [2024-12-14 03:04:54.035973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.013 [2024-12-14 03:04:54.058932] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.272 03:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.272 03:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:39.272 03:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FFI3omtsdz 00:23:39.272 03:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.530 [2024-12-14 03:04:54.501671] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.530 TLSTESTn1 00:23:39.530 03:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:39.789 03:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:39.789 "subsystems": [ 00:23:39.789 { 00:23:39.789 "subsystem": "keyring", 00:23:39.789 "config": [ 00:23:39.789 { 00:23:39.789 "method": "keyring_file_add_key", 00:23:39.789 "params": { 00:23:39.789 "name": "key0", 00:23:39.789 "path": "/tmp/tmp.FFI3omtsdz" 00:23:39.789 } 00:23:39.789 } 00:23:39.789 ] 00:23:39.789 }, 00:23:39.789 { 00:23:39.789 "subsystem": "iobuf", 00:23:39.789 "config": [ 00:23:39.789 { 00:23:39.789 "method": "iobuf_set_options", 00:23:39.789 "params": { 00:23:39.789 "small_pool_count": 8192, 00:23:39.789 "large_pool_count": 1024, 00:23:39.789 "small_bufsize": 8192, 00:23:39.789 "large_bufsize": 135168, 00:23:39.789 "enable_numa": false 00:23:39.789 } 00:23:39.789 } 00:23:39.789 ] 00:23:39.789 }, 00:23:39.789 { 00:23:39.789 "subsystem": "sock", 00:23:39.789 "config": [ 00:23:39.789 { 00:23:39.789 "method": "sock_set_default_impl", 00:23:39.789 "params": { 00:23:39.789 "impl_name": "posix" 
00:23:39.789 } 00:23:39.789 }, 00:23:39.789 { 00:23:39.789 "method": "sock_impl_set_options", 00:23:39.789 "params": { 00:23:39.789 "impl_name": "ssl", 00:23:39.789 "recv_buf_size": 4096, 00:23:39.789 "send_buf_size": 4096, 00:23:39.789 "enable_recv_pipe": true, 00:23:39.789 "enable_quickack": false, 00:23:39.789 "enable_placement_id": 0, 00:23:39.789 "enable_zerocopy_send_server": true, 00:23:39.789 "enable_zerocopy_send_client": false, 00:23:39.789 "zerocopy_threshold": 0, 00:23:39.789 "tls_version": 0, 00:23:39.789 "enable_ktls": false 00:23:39.789 } 00:23:39.789 }, 00:23:39.789 { 00:23:39.789 "method": "sock_impl_set_options", 00:23:39.789 "params": { 00:23:39.789 "impl_name": "posix", 00:23:39.789 "recv_buf_size": 2097152, 00:23:39.789 "send_buf_size": 2097152, 00:23:39.789 "enable_recv_pipe": true, 00:23:39.789 "enable_quickack": false, 00:23:39.789 "enable_placement_id": 0, 00:23:39.789 "enable_zerocopy_send_server": true, 00:23:39.789 "enable_zerocopy_send_client": false, 00:23:39.789 "zerocopy_threshold": 0, 00:23:39.789 "tls_version": 0, 00:23:39.789 "enable_ktls": false 00:23:39.789 } 00:23:39.789 } 00:23:39.789 ] 00:23:39.789 }, 00:23:39.789 { 00:23:39.789 "subsystem": "vmd", 00:23:39.789 "config": [] 00:23:39.789 }, 00:23:39.789 { 00:23:39.789 "subsystem": "accel", 00:23:39.789 "config": [ 00:23:39.789 { 00:23:39.789 "method": "accel_set_options", 00:23:39.789 "params": { 00:23:39.789 "small_cache_size": 128, 00:23:39.789 "large_cache_size": 16, 00:23:39.789 "task_count": 2048, 00:23:39.789 "sequence_count": 2048, 00:23:39.789 "buf_count": 2048 00:23:39.789 } 00:23:39.789 } 00:23:39.789 ] 00:23:39.789 }, 00:23:39.789 { 00:23:39.789 "subsystem": "bdev", 00:23:39.789 "config": [ 00:23:39.789 { 00:23:39.789 "method": "bdev_set_options", 00:23:39.789 "params": { 00:23:39.789 "bdev_io_pool_size": 65535, 00:23:39.789 "bdev_io_cache_size": 256, 00:23:39.789 "bdev_auto_examine": true, 00:23:39.789 "iobuf_small_cache_size": 128, 00:23:39.789 "iobuf_large_cache_size": 16 00:23:39.789 } 00:23:39.789 }, 00:23:39.789 { 00:23:39.789 "method": "bdev_raid_set_options", 00:23:39.789 "params": { 00:23:39.789 "process_window_size_kb": 1024, 00:23:39.789 "process_max_bandwidth_mb_sec": 0 00:23:39.789 } 00:23:39.789 }, 00:23:39.789 { 00:23:39.789 "method": "bdev_iscsi_set_options", 00:23:39.789 "params": { 00:23:39.789 "timeout_sec": 30 00:23:39.789 } 00:23:39.789 }, 00:23:39.789 { 00:23:39.789 "method": "bdev_nvme_set_options", 00:23:39.789 "params": { 00:23:39.789 "action_on_timeout": "none", 00:23:39.789 "timeout_us": 0, 00:23:39.789 "timeout_admin_us": 0, 00:23:39.789 "keep_alive_timeout_ms": 10000, 00:23:39.789 "arbitration_burst": 0, 00:23:39.789 "low_priority_weight": 0, 00:23:39.789 "medium_priority_weight": 0, 00:23:39.789 "high_priority_weight": 0, 00:23:39.789 "nvme_adminq_poll_period_us": 10000, 00:23:39.789 "nvme_ioq_poll_period_us": 0, 00:23:39.789 "io_queue_requests": 0, 00:23:39.789 "delay_cmd_submit": true, 00:23:39.789 "transport_retry_count": 4, 00:23:39.789 "bdev_retry_count": 3, 00:23:39.789 "transport_ack_timeout": 0, 00:23:39.789 "ctrlr_loss_timeout_sec": 0, 00:23:39.789 "reconnect_delay_sec": 0, 00:23:39.789 "fast_io_fail_timeout_sec": 0, 00:23:39.789 "disable_auto_failback": false, 00:23:39.789 "generate_uuids": false, 00:23:39.789 "transport_tos": 0, 00:23:39.789 "nvme_error_stat": false, 00:23:39.789 "rdma_srq_size": 0, 00:23:39.789 "io_path_stat": false, 00:23:39.789 "allow_accel_sequence": false, 00:23:39.789 "rdma_max_cq_size": 0, 00:23:39.789 
"rdma_cm_event_timeout_ms": 0, 00:23:39.789 "dhchap_digests": [ 00:23:39.789 "sha256", 00:23:39.789 "sha384", 00:23:39.789 "sha512" 00:23:39.789 ], 00:23:39.789 "dhchap_dhgroups": [ 00:23:39.789 "null", 00:23:39.789 "ffdhe2048", 00:23:39.789 "ffdhe3072", 00:23:39.789 "ffdhe4096", 00:23:39.789 "ffdhe6144", 00:23:39.789 "ffdhe8192" 00:23:39.789 ], 00:23:39.789 "rdma_umr_per_io": false 00:23:39.789 } 00:23:39.789 }, 00:23:39.789 { 00:23:39.789 "method": "bdev_nvme_set_hotplug", 00:23:39.789 "params": { 00:23:39.789 "period_us": 100000, 00:23:39.789 "enable": false 00:23:39.789 } 00:23:39.789 }, 00:23:39.789 { 00:23:39.789 "method": "bdev_malloc_create", 00:23:39.789 "params": { 00:23:39.789 "name": "malloc0", 00:23:39.789 "num_blocks": 8192, 00:23:39.789 "block_size": 4096, 00:23:39.789 "physical_block_size": 4096, 00:23:39.789 "uuid": "580a9b61-9b0f-49df-a384-f8641fb32168", 00:23:39.789 "optimal_io_boundary": 0, 00:23:39.789 "md_size": 0, 00:23:39.789 "dif_type": 0, 00:23:39.789 "dif_is_head_of_md": false, 00:23:39.789 "dif_pi_format": 0 00:23:39.789 } 00:23:39.790 }, 00:23:39.790 { 00:23:39.790 "method": "bdev_wait_for_examine" 00:23:39.790 } 00:23:39.790 ] 00:23:39.790 }, 00:23:39.790 { 00:23:39.790 "subsystem": "nbd", 00:23:39.790 "config": [] 00:23:39.790 }, 00:23:39.790 { 00:23:39.790 "subsystem": "scheduler", 00:23:39.790 "config": [ 00:23:39.790 { 00:23:39.790 "method": "framework_set_scheduler", 00:23:39.790 "params": { 00:23:39.790 "name": "static" 00:23:39.790 } 00:23:39.790 } 00:23:39.790 ] 00:23:39.790 }, 00:23:39.790 { 00:23:39.790 "subsystem": "nvmf", 00:23:39.790 "config": [ 00:23:39.790 { 00:23:39.790 "method": "nvmf_set_config", 00:23:39.790 "params": { 00:23:39.790 "discovery_filter": "match_any", 00:23:39.790 "admin_cmd_passthru": { 00:23:39.790 "identify_ctrlr": false 00:23:39.790 }, 00:23:39.790 "dhchap_digests": [ 00:23:39.790 "sha256", 00:23:39.790 "sha384", 00:23:39.790 "sha512" 00:23:39.790 ], 00:23:39.790 "dhchap_dhgroups": [ 00:23:39.790 "null", 00:23:39.790 "ffdhe2048", 00:23:39.790 "ffdhe3072", 00:23:39.790 "ffdhe4096", 00:23:39.790 "ffdhe6144", 00:23:39.790 "ffdhe8192" 00:23:39.790 ] 00:23:39.790 } 00:23:39.790 }, 00:23:39.790 { 00:23:39.790 "method": "nvmf_set_max_subsystems", 00:23:39.790 "params": { 00:23:39.790 "max_subsystems": 1024 00:23:39.790 } 00:23:39.790 }, 00:23:39.790 { 00:23:39.790 "method": "nvmf_set_crdt", 00:23:39.790 "params": { 00:23:39.790 "crdt1": 0, 00:23:39.790 "crdt2": 0, 00:23:39.790 "crdt3": 0 00:23:39.790 } 00:23:39.790 }, 00:23:39.790 { 00:23:39.790 "method": "nvmf_create_transport", 00:23:39.790 "params": { 00:23:39.790 "trtype": "TCP", 00:23:39.790 "max_queue_depth": 128, 00:23:39.790 "max_io_qpairs_per_ctrlr": 127, 00:23:39.790 "in_capsule_data_size": 4096, 00:23:39.790 "max_io_size": 131072, 00:23:39.790 "io_unit_size": 131072, 00:23:39.790 "max_aq_depth": 128, 00:23:39.790 "num_shared_buffers": 511, 00:23:39.790 "buf_cache_size": 4294967295, 00:23:39.790 "dif_insert_or_strip": false, 00:23:39.790 "zcopy": false, 00:23:39.790 "c2h_success": false, 00:23:39.790 "sock_priority": 0, 00:23:39.790 "abort_timeout_sec": 1, 00:23:39.790 "ack_timeout": 0, 00:23:39.790 "data_wr_pool_size": 0 00:23:39.790 } 00:23:39.790 }, 00:23:39.790 { 00:23:39.790 "method": "nvmf_create_subsystem", 00:23:39.790 "params": { 00:23:39.790 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.790 "allow_any_host": false, 00:23:39.790 "serial_number": "SPDK00000000000001", 00:23:39.790 "model_number": "SPDK bdev Controller", 00:23:39.790 "max_namespaces": 10, 
00:23:39.790 "min_cntlid": 1, 00:23:39.790 "max_cntlid": 65519, 00:23:39.790 "ana_reporting": false 00:23:39.790 } 00:23:39.790 }, 00:23:39.790 { 00:23:39.790 "method": "nvmf_subsystem_add_host", 00:23:39.790 "params": { 00:23:39.790 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.790 "host": "nqn.2016-06.io.spdk:host1", 00:23:39.790 "psk": "key0" 00:23:39.790 } 00:23:39.790 }, 00:23:39.790 { 00:23:39.790 "method": "nvmf_subsystem_add_ns", 00:23:39.790 "params": { 00:23:39.790 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.790 "namespace": { 00:23:39.790 "nsid": 1, 00:23:39.790 "bdev_name": "malloc0", 00:23:39.790 "nguid": "580A9B619B0F49DFA384F8641FB32168", 00:23:39.790 "uuid": "580a9b61-9b0f-49df-a384-f8641fb32168", 00:23:39.790 "no_auto_visible": false 00:23:39.790 } 00:23:39.790 } 00:23:39.790 }, 00:23:39.790 { 00:23:39.790 "method": "nvmf_subsystem_add_listener", 00:23:39.790 "params": { 00:23:39.790 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.790 "listen_address": { 00:23:39.790 "trtype": "TCP", 00:23:39.790 "adrfam": "IPv4", 00:23:39.790 "traddr": "10.0.0.2", 00:23:39.790 "trsvcid": "4420" 00:23:39.790 }, 00:23:39.790 "secure_channel": true 00:23:39.790 } 00:23:39.790 } 00:23:39.790 ] 00:23:39.790 } 00:23:39.790 ] 00:23:39.790 }' 00:23:39.790 03:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:40.049 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:40.049 "subsystems": [ 00:23:40.049 { 00:23:40.049 "subsystem": "keyring", 00:23:40.049 "config": [ 00:23:40.049 { 00:23:40.049 "method": "keyring_file_add_key", 00:23:40.049 "params": { 00:23:40.049 "name": "key0", 00:23:40.049 "path": "/tmp/tmp.FFI3omtsdz" 00:23:40.049 } 00:23:40.049 } 00:23:40.049 ] 00:23:40.049 }, 00:23:40.049 { 00:23:40.049 "subsystem": "iobuf", 00:23:40.049 "config": [ 00:23:40.049 { 00:23:40.049 "method": "iobuf_set_options", 00:23:40.049 "params": { 00:23:40.049 "small_pool_count": 8192, 00:23:40.049 "large_pool_count": 1024, 00:23:40.049 "small_bufsize": 8192, 00:23:40.049 "large_bufsize": 135168, 00:23:40.049 "enable_numa": false 00:23:40.049 } 00:23:40.049 } 00:23:40.049 ] 00:23:40.049 }, 00:23:40.049 { 00:23:40.049 "subsystem": "sock", 00:23:40.049 "config": [ 00:23:40.049 { 00:23:40.049 "method": "sock_set_default_impl", 00:23:40.049 "params": { 00:23:40.049 "impl_name": "posix" 00:23:40.049 } 00:23:40.049 }, 00:23:40.049 { 00:23:40.049 "method": "sock_impl_set_options", 00:23:40.049 "params": { 00:23:40.049 "impl_name": "ssl", 00:23:40.049 "recv_buf_size": 4096, 00:23:40.049 "send_buf_size": 4096, 00:23:40.049 "enable_recv_pipe": true, 00:23:40.049 "enable_quickack": false, 00:23:40.049 "enable_placement_id": 0, 00:23:40.049 "enable_zerocopy_send_server": true, 00:23:40.049 "enable_zerocopy_send_client": false, 00:23:40.049 "zerocopy_threshold": 0, 00:23:40.049 "tls_version": 0, 00:23:40.049 "enable_ktls": false 00:23:40.049 } 00:23:40.049 }, 00:23:40.049 { 00:23:40.049 "method": "sock_impl_set_options", 00:23:40.049 "params": { 00:23:40.049 "impl_name": "posix", 00:23:40.049 "recv_buf_size": 2097152, 00:23:40.049 "send_buf_size": 2097152, 00:23:40.049 "enable_recv_pipe": true, 00:23:40.049 "enable_quickack": false, 00:23:40.049 "enable_placement_id": 0, 00:23:40.049 "enable_zerocopy_send_server": true, 00:23:40.049 "enable_zerocopy_send_client": false, 00:23:40.049 "zerocopy_threshold": 0, 00:23:40.049 "tls_version": 0, 00:23:40.049 
"enable_ktls": false 00:23:40.049 } 00:23:40.049 } 00:23:40.049 ] 00:23:40.049 }, 00:23:40.049 { 00:23:40.049 "subsystem": "vmd", 00:23:40.049 "config": [] 00:23:40.049 }, 00:23:40.049 { 00:23:40.049 "subsystem": "accel", 00:23:40.049 "config": [ 00:23:40.049 { 00:23:40.049 "method": "accel_set_options", 00:23:40.049 "params": { 00:23:40.049 "small_cache_size": 128, 00:23:40.049 "large_cache_size": 16, 00:23:40.049 "task_count": 2048, 00:23:40.049 "sequence_count": 2048, 00:23:40.049 "buf_count": 2048 00:23:40.049 } 00:23:40.049 } 00:23:40.049 ] 00:23:40.049 }, 00:23:40.049 { 00:23:40.049 "subsystem": "bdev", 00:23:40.049 "config": [ 00:23:40.049 { 00:23:40.049 "method": "bdev_set_options", 00:23:40.049 "params": { 00:23:40.049 "bdev_io_pool_size": 65535, 00:23:40.049 "bdev_io_cache_size": 256, 00:23:40.049 "bdev_auto_examine": true, 00:23:40.049 "iobuf_small_cache_size": 128, 00:23:40.049 "iobuf_large_cache_size": 16 00:23:40.049 } 00:23:40.049 }, 00:23:40.049 { 00:23:40.049 "method": "bdev_raid_set_options", 00:23:40.049 "params": { 00:23:40.049 "process_window_size_kb": 1024, 00:23:40.049 "process_max_bandwidth_mb_sec": 0 00:23:40.049 } 00:23:40.049 }, 00:23:40.049 { 00:23:40.049 "method": "bdev_iscsi_set_options", 00:23:40.049 "params": { 00:23:40.049 "timeout_sec": 30 00:23:40.049 } 00:23:40.049 }, 00:23:40.049 { 00:23:40.049 "method": "bdev_nvme_set_options", 00:23:40.049 "params": { 00:23:40.049 "action_on_timeout": "none", 00:23:40.049 "timeout_us": 0, 00:23:40.049 "timeout_admin_us": 0, 00:23:40.049 "keep_alive_timeout_ms": 10000, 00:23:40.049 "arbitration_burst": 0, 00:23:40.049 "low_priority_weight": 0, 00:23:40.049 "medium_priority_weight": 0, 00:23:40.049 "high_priority_weight": 0, 00:23:40.049 "nvme_adminq_poll_period_us": 10000, 00:23:40.049 "nvme_ioq_poll_period_us": 0, 00:23:40.049 "io_queue_requests": 512, 00:23:40.049 "delay_cmd_submit": true, 00:23:40.049 "transport_retry_count": 4, 00:23:40.049 "bdev_retry_count": 3, 00:23:40.049 "transport_ack_timeout": 0, 00:23:40.049 "ctrlr_loss_timeout_sec": 0, 00:23:40.049 "reconnect_delay_sec": 0, 00:23:40.049 "fast_io_fail_timeout_sec": 0, 00:23:40.049 "disable_auto_failback": false, 00:23:40.049 "generate_uuids": false, 00:23:40.049 "transport_tos": 0, 00:23:40.049 "nvme_error_stat": false, 00:23:40.049 "rdma_srq_size": 0, 00:23:40.049 "io_path_stat": false, 00:23:40.049 "allow_accel_sequence": false, 00:23:40.049 "rdma_max_cq_size": 0, 00:23:40.049 "rdma_cm_event_timeout_ms": 0, 00:23:40.049 "dhchap_digests": [ 00:23:40.049 "sha256", 00:23:40.049 "sha384", 00:23:40.049 "sha512" 00:23:40.049 ], 00:23:40.049 "dhchap_dhgroups": [ 00:23:40.049 "null", 00:23:40.049 "ffdhe2048", 00:23:40.049 "ffdhe3072", 00:23:40.049 "ffdhe4096", 00:23:40.049 "ffdhe6144", 00:23:40.049 "ffdhe8192" 00:23:40.049 ], 00:23:40.049 "rdma_umr_per_io": false 00:23:40.049 } 00:23:40.049 }, 00:23:40.049 { 00:23:40.049 "method": "bdev_nvme_attach_controller", 00:23:40.049 "params": { 00:23:40.049 "name": "TLSTEST", 00:23:40.049 "trtype": "TCP", 00:23:40.049 "adrfam": "IPv4", 00:23:40.049 "traddr": "10.0.0.2", 00:23:40.049 "trsvcid": "4420", 00:23:40.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.049 "prchk_reftag": false, 00:23:40.049 "prchk_guard": false, 00:23:40.049 "ctrlr_loss_timeout_sec": 0, 00:23:40.049 "reconnect_delay_sec": 0, 00:23:40.049 "fast_io_fail_timeout_sec": 0, 00:23:40.049 "psk": "key0", 00:23:40.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.049 "hdgst": false, 00:23:40.049 "ddgst": false, 00:23:40.049 "multipath": "multipath" 
00:23:40.049 } 00:23:40.049 }, 00:23:40.049 { 00:23:40.049 "method": "bdev_nvme_set_hotplug", 00:23:40.049 "params": { 00:23:40.049 "period_us": 100000, 00:23:40.049 "enable": false 00:23:40.049 } 00:23:40.049 }, 00:23:40.049 { 00:23:40.050 "method": "bdev_wait_for_examine" 00:23:40.050 } 00:23:40.050 ] 00:23:40.050 }, 00:23:40.050 { 00:23:40.050 "subsystem": "nbd", 00:23:40.050 "config": [] 00:23:40.050 } 00:23:40.050 ] 00:23:40.050 }' 00:23:40.050 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 317409 00:23:40.050 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317409 ']' 00:23:40.050 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317409 00:23:40.050 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:40.050 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.050 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317409 00:23:40.050 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:40.050 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:40.050 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317409' 00:23:40.050 killing process with pid 317409 00:23:40.050 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317409 00:23:40.050 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.050 00:23:40.050 Latency(us) 00:23:40.050 [2024-12-14T02:04:55.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.050 [2024-12-14T02:04:55.183Z] =================================================================================================================== 00:23:40.050 [2024-12-14T02:04:55.183Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:40.050 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317409 00:23:40.308 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 317368 00:23:40.308 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317368 ']' 00:23:40.308 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317368 00:23:40.308 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:40.308 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.308 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317368 00:23:40.308 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:40.308 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:40.308 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317368' 00:23:40.308 killing process with pid 317368 00:23:40.308 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317368 00:23:40.308 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317368 
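At this point the previous bdevperf/target pair (pids 317409 and 317368) has been shut down, and the next sub-test (target/tls.sh@205 onwards) replays the JSON captured with save_config into a freshly started nvmf_tgt through a /dev/fd descriptor. A minimal sketch of that replay pattern, assuming the SPDK repository root as the working directory and the default RPC sockets (the full Jenkins workspace paths in the log are shortened here):

    # capture the running target's configuration as JSON over its RPC socket
    tgtconf=$(scripts/rpc.py save_config)
    # hand the JSON back to a fresh target via process substitution; bash exposes
    # the pipe as a /dev/fd path, which is what the '-c /dev/fd/62' below refers to
    build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")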
00:23:40.567 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:40.567 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.567 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.567 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:40.567 "subsystems": [ 00:23:40.567 { 00:23:40.567 "subsystem": "keyring", 00:23:40.567 "config": [ 00:23:40.567 { 00:23:40.567 "method": "keyring_file_add_key", 00:23:40.567 "params": { 00:23:40.567 "name": "key0", 00:23:40.567 "path": "/tmp/tmp.FFI3omtsdz" 00:23:40.567 } 00:23:40.567 } 00:23:40.567 ] 00:23:40.567 }, 00:23:40.567 { 00:23:40.567 "subsystem": "iobuf", 00:23:40.567 "config": [ 00:23:40.568 { 00:23:40.568 "method": "iobuf_set_options", 00:23:40.568 "params": { 00:23:40.568 "small_pool_count": 8192, 00:23:40.568 "large_pool_count": 1024, 00:23:40.568 "small_bufsize": 8192, 00:23:40.568 "large_bufsize": 135168, 00:23:40.568 "enable_numa": false 00:23:40.568 } 00:23:40.568 } 00:23:40.568 ] 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "subsystem": "sock", 00:23:40.568 "config": [ 00:23:40.568 { 00:23:40.568 "method": "sock_set_default_impl", 00:23:40.568 "params": { 00:23:40.568 "impl_name": "posix" 00:23:40.568 } 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "method": "sock_impl_set_options", 00:23:40.568 "params": { 00:23:40.568 "impl_name": "ssl", 00:23:40.568 "recv_buf_size": 4096, 00:23:40.568 "send_buf_size": 4096, 00:23:40.568 "enable_recv_pipe": true, 00:23:40.568 "enable_quickack": false, 00:23:40.568 "enable_placement_id": 0, 00:23:40.568 "enable_zerocopy_send_server": true, 00:23:40.568 "enable_zerocopy_send_client": false, 00:23:40.568 "zerocopy_threshold": 0, 00:23:40.568 "tls_version": 0, 00:23:40.568 "enable_ktls": false 00:23:40.568 } 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "method": "sock_impl_set_options", 00:23:40.568 "params": { 00:23:40.568 "impl_name": "posix", 00:23:40.568 "recv_buf_size": 2097152, 00:23:40.568 "send_buf_size": 2097152, 00:23:40.568 "enable_recv_pipe": true, 00:23:40.568 "enable_quickack": false, 00:23:40.568 "enable_placement_id": 0, 00:23:40.568 "enable_zerocopy_send_server": true, 00:23:40.568 "enable_zerocopy_send_client": false, 00:23:40.568 "zerocopy_threshold": 0, 00:23:40.568 "tls_version": 0, 00:23:40.568 "enable_ktls": false 00:23:40.568 } 00:23:40.568 } 00:23:40.568 ] 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "subsystem": "vmd", 00:23:40.568 "config": [] 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "subsystem": "accel", 00:23:40.568 "config": [ 00:23:40.568 { 00:23:40.568 "method": "accel_set_options", 00:23:40.568 "params": { 00:23:40.568 "small_cache_size": 128, 00:23:40.568 "large_cache_size": 16, 00:23:40.568 "task_count": 2048, 00:23:40.568 "sequence_count": 2048, 00:23:40.568 "buf_count": 2048 00:23:40.568 } 00:23:40.568 } 00:23:40.568 ] 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "subsystem": "bdev", 00:23:40.568 "config": [ 00:23:40.568 { 00:23:40.568 "method": "bdev_set_options", 00:23:40.568 "params": { 00:23:40.568 "bdev_io_pool_size": 65535, 00:23:40.568 "bdev_io_cache_size": 256, 00:23:40.568 "bdev_auto_examine": true, 00:23:40.568 "iobuf_small_cache_size": 128, 00:23:40.568 "iobuf_large_cache_size": 16 00:23:40.568 } 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "method": "bdev_raid_set_options", 00:23:40.568 "params": { 00:23:40.568 "process_window_size_kb": 1024, 00:23:40.568 
"process_max_bandwidth_mb_sec": 0 00:23:40.568 } 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "method": "bdev_iscsi_set_options", 00:23:40.568 "params": { 00:23:40.568 "timeout_sec": 30 00:23:40.568 } 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "method": "bdev_nvme_set_options", 00:23:40.568 "params": { 00:23:40.568 "action_on_timeout": "none", 00:23:40.568 "timeout_us": 0, 00:23:40.568 "timeout_admin_us": 0, 00:23:40.568 "keep_alive_timeout_ms": 10000, 00:23:40.568 "arbitration_burst": 0, 00:23:40.568 "low_priority_weight": 0, 00:23:40.568 "medium_priority_weight": 0, 00:23:40.568 "high_priority_weight": 0, 00:23:40.568 "nvme_adminq_poll_period_us": 10000, 00:23:40.568 "nvme_ioq_poll_period_us": 0, 00:23:40.568 "io_queue_requests": 0, 00:23:40.568 "delay_cmd_submit": true, 00:23:40.568 "transport_retry_count": 4, 00:23:40.568 "bdev_retry_count": 3, 00:23:40.568 "transport_ack_timeout": 0, 00:23:40.568 "ctrlr_loss_timeout_sec": 0, 00:23:40.568 "reconnect_delay_sec": 0, 00:23:40.568 "fast_io_fail_timeout_sec": 0, 00:23:40.568 "disable_auto_failback": false, 00:23:40.568 "generate_uuids": false, 00:23:40.568 "transport_tos": 0, 00:23:40.568 "nvme_error_stat": false, 00:23:40.568 "rdma_srq_size": 0, 00:23:40.568 "io_path_stat": false, 00:23:40.568 "allow_accel_sequence": false, 00:23:40.568 "rdma_max_cq_size": 0, 00:23:40.568 "rdma_cm_event_timeout_ms": 0, 00:23:40.568 "dhchap_digests": [ 00:23:40.568 "sha256", 00:23:40.568 "sha384", 00:23:40.568 "sha512" 00:23:40.568 ], 00:23:40.568 "dhchap_dhgroups": [ 00:23:40.568 "null", 00:23:40.568 "ffdhe2048", 00:23:40.568 "ffdhe3072", 00:23:40.568 "ffdhe4096", 00:23:40.568 "ffdhe6144", 00:23:40.568 "ffdhe8192" 00:23:40.568 ], 00:23:40.568 "rdma_umr_per_io": false 00:23:40.568 } 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "method": "bdev_nvme_set_hotplug", 00:23:40.568 "params": { 00:23:40.568 "period_us": 100000, 00:23:40.568 "enable": false 00:23:40.568 } 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "method": "bdev_malloc_create", 00:23:40.568 "params": { 00:23:40.568 "name": "malloc0", 00:23:40.568 "num_blocks": 8192, 00:23:40.568 "block_size": 4096, 00:23:40.568 "physical_block_size": 4096, 00:23:40.568 "uuid": "580a9b61-9b0f-49df-a384-f8641fb32168", 00:23:40.568 "optimal_io_boundary": 0, 00:23:40.568 "md_size": 0, 00:23:40.568 "dif_type": 0, 00:23:40.568 "dif_is_head_of_md": false, 00:23:40.568 "dif_pi_format": 0 00:23:40.568 } 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "method": "bdev_wait_for_examine" 00:23:40.568 } 00:23:40.568 ] 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "subsystem": "nbd", 00:23:40.568 "config": [] 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "subsystem": "scheduler", 00:23:40.568 "config": [ 00:23:40.568 { 00:23:40.568 "method": "framework_set_scheduler", 00:23:40.568 "params": { 00:23:40.568 "name": "static" 00:23:40.568 } 00:23:40.568 } 00:23:40.568 ] 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "subsystem": "nvmf", 00:23:40.568 "config": [ 00:23:40.568 { 00:23:40.568 "method": "nvmf_set_config", 00:23:40.568 "params": { 00:23:40.568 "discovery_filter": "match_any", 00:23:40.568 "admin_cmd_passthru": { 00:23:40.568 "identify_ctrlr": false 00:23:40.568 }, 00:23:40.568 "dhchap_digests": [ 00:23:40.568 "sha256", 00:23:40.568 "sha384", 00:23:40.568 "sha512" 00:23:40.568 ], 00:23:40.568 "dhchap_dhgroups": [ 00:23:40.568 "null", 00:23:40.568 "ffdhe2048", 00:23:40.568 "ffdhe3072", 00:23:40.568 "ffdhe4096", 00:23:40.568 "ffdhe6144", 00:23:40.568 "ffdhe8192" 00:23:40.568 ] 00:23:40.568 } 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 
"method": "nvmf_set_max_subsystems", 00:23:40.568 "params": { 00:23:40.568 "max_subsystems": 1024 00:23:40.568 } 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "method": "nvmf_set_crdt", 00:23:40.568 "params": { 00:23:40.568 "crdt1": 0, 00:23:40.568 "crdt2": 0, 00:23:40.568 "crdt3": 0 00:23:40.568 } 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "method": "nvmf_create_transport", 00:23:40.568 "params": { 00:23:40.568 "trtype": "TCP", 00:23:40.568 "max_queue_depth": 128, 00:23:40.568 "max_io_qpairs_per_ctrlr": 127, 00:23:40.568 "in_capsule_data_size": 4096, 00:23:40.568 "max_io_size": 131072, 00:23:40.568 "io_unit_size": 131072, 00:23:40.568 "max_aq_depth": 128, 00:23:40.568 "num_shared_buffers": 511, 00:23:40.568 "buf_cache_size": 4294967295, 00:23:40.568 "dif_insert_or_strip": false, 00:23:40.568 "zcopy": false, 00:23:40.568 "c2h_success": false, 00:23:40.568 "sock_priority": 0, 00:23:40.568 "abort_timeout_sec": 1, 00:23:40.568 "ack_timeout": 0, 00:23:40.568 "data_wr_pool_size": 0 00:23:40.568 } 00:23:40.568 }, 00:23:40.568 { 00:23:40.568 "method": "nvmf_create_subsystem", 00:23:40.568 "params": { 00:23:40.568 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.569 "allow_any_host": false, 00:23:40.569 "serial_number": "SPDK00000000000001", 00:23:40.569 "model_number": "SPDK bdev Controller", 00:23:40.569 "max_namespaces": 10, 00:23:40.569 "min_cntlid": 1, 00:23:40.569 "max_cntlid": 65519, 00:23:40.569 "ana_reporting": false 00:23:40.569 } 00:23:40.569 }, 00:23:40.569 { 00:23:40.569 "method": "nvmf_subsystem_add_host", 00:23:40.569 "params": { 00:23:40.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.569 "host": "nqn.2016-06.io.spdk:host1", 00:23:40.569 "psk": "key0" 00:23:40.569 } 00:23:40.569 }, 00:23:40.569 { 00:23:40.569 "method": "nvmf_subsystem_add_ns", 00:23:40.569 "params": { 00:23:40.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.569 "namespace": { 00:23:40.569 "nsid": 1, 00:23:40.569 "bdev_name": "malloc0", 00:23:40.569 "nguid": "580A9B619B0F49DFA384F8641FB32168", 00:23:40.569 "uuid": "580a9b61-9b0f-49df-a384-f8641fb32168", 00:23:40.569 "no_auto_visible": false 00:23:40.569 } 00:23:40.569 } 00:23:40.569 }, 00:23:40.569 { 00:23:40.569 "method": "nvmf_subsystem_add_listener", 00:23:40.569 "params": { 00:23:40.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.569 "listen_address": { 00:23:40.569 "trtype": "TCP", 00:23:40.569 "adrfam": "IPv4", 00:23:40.569 "traddr": "10.0.0.2", 00:23:40.569 "trsvcid": "4420" 00:23:40.569 }, 00:23:40.569 "secure_channel": true 00:23:40.569 } 00:23:40.569 } 00:23:40.569 ] 00:23:40.569 } 00:23:40.569 ] 00:23:40.569 }' 00:23:40.569 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.569 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=317454 00:23:40.569 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:40.569 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 317454 00:23:40.569 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317454 ']' 00:23:40.569 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.569 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.569 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.569 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.569 03:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.569 [2024-12-14 03:04:55.592374] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:40.569 [2024-12-14 03:04:55.592416] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.569 [2024-12-14 03:04:55.670925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.569 [2024-12-14 03:04:55.691669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.569 [2024-12-14 03:04:55.691703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.569 [2024-12-14 03:04:55.691710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.569 [2024-12-14 03:04:55.691716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.569 [2024-12-14 03:04:55.691721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.569 [2024-12-14 03:04:55.692227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.828 [2024-12-14 03:04:55.898496] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.828 [2024-12-14 03:04:55.930505] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.828 [2024-12-14 03:04:55.930695] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.395 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.395 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:41.395 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.395 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.395 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.395 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.395 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=317483 00:23:41.395 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 317483 /var/tmp/bdevperf.sock 00:23:41.395 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317483 ']' 00:23:41.395 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.396 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:41.396 03:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.396 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.396 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:41.396 "subsystems": [ 00:23:41.396 { 00:23:41.396 "subsystem": "keyring", 00:23:41.396 "config": [ 00:23:41.396 { 00:23:41.396 "method": "keyring_file_add_key", 00:23:41.396 "params": { 00:23:41.396 "name": "key0", 00:23:41.396 "path": "/tmp/tmp.FFI3omtsdz" 00:23:41.396 } 00:23:41.396 } 00:23:41.396 ] 00:23:41.396 }, 00:23:41.396 { 00:23:41.396 "subsystem": "iobuf", 00:23:41.396 "config": [ 00:23:41.396 { 00:23:41.396 "method": "iobuf_set_options", 00:23:41.396 "params": { 00:23:41.396 "small_pool_count": 8192, 00:23:41.396 "large_pool_count": 1024, 00:23:41.396 "small_bufsize": 8192, 00:23:41.396 "large_bufsize": 135168, 00:23:41.396 "enable_numa": false 00:23:41.396 } 00:23:41.396 } 00:23:41.396 ] 00:23:41.396 }, 00:23:41.396 { 00:23:41.396 "subsystem": "sock", 00:23:41.396 "config": [ 00:23:41.396 { 00:23:41.396 "method": "sock_set_default_impl", 00:23:41.396 "params": { 00:23:41.396 "impl_name": "posix" 00:23:41.396 } 00:23:41.396 }, 00:23:41.396 { 00:23:41.396 "method": "sock_impl_set_options", 00:23:41.396 "params": { 00:23:41.396 "impl_name": "ssl", 00:23:41.396 "recv_buf_size": 4096, 00:23:41.396 "send_buf_size": 4096, 00:23:41.396 "enable_recv_pipe": true, 00:23:41.396 "enable_quickack": false, 00:23:41.396 "enable_placement_id": 0, 00:23:41.396 "enable_zerocopy_send_server": true, 00:23:41.396 "enable_zerocopy_send_client": false, 00:23:41.396 "zerocopy_threshold": 0, 00:23:41.396 "tls_version": 0, 00:23:41.396 "enable_ktls": false 00:23:41.396 } 00:23:41.396 }, 00:23:41.396 { 00:23:41.396 "method": "sock_impl_set_options", 00:23:41.396 "params": { 00:23:41.396 "impl_name": "posix", 00:23:41.396 "recv_buf_size": 2097152, 00:23:41.396 "send_buf_size": 2097152, 00:23:41.396 "enable_recv_pipe": true, 00:23:41.396 "enable_quickack": false, 00:23:41.396 "enable_placement_id": 0, 00:23:41.396 "enable_zerocopy_send_server": true, 00:23:41.396 "enable_zerocopy_send_client": false, 00:23:41.396 "zerocopy_threshold": 0, 00:23:41.396 "tls_version": 0, 00:23:41.396 "enable_ktls": false 00:23:41.396 } 00:23:41.396 } 00:23:41.396 ] 00:23:41.396 }, 00:23:41.396 { 00:23:41.396 "subsystem": "vmd", 00:23:41.396 "config": [] 00:23:41.396 }, 00:23:41.396 { 00:23:41.396 "subsystem": "accel", 00:23:41.396 "config": [ 00:23:41.396 { 00:23:41.396 "method": "accel_set_options", 00:23:41.396 "params": { 00:23:41.396 "small_cache_size": 128, 00:23:41.396 "large_cache_size": 16, 00:23:41.396 "task_count": 2048, 00:23:41.396 "sequence_count": 2048, 00:23:41.396 "buf_count": 2048 00:23:41.396 } 00:23:41.396 } 00:23:41.396 ] 00:23:41.396 }, 00:23:41.396 { 00:23:41.396 "subsystem": "bdev", 00:23:41.396 "config": [ 00:23:41.396 { 00:23:41.396 "method": "bdev_set_options", 00:23:41.396 "params": { 00:23:41.396 "bdev_io_pool_size": 65535, 00:23:41.396 "bdev_io_cache_size": 256, 00:23:41.396 "bdev_auto_examine": true, 00:23:41.396 "iobuf_small_cache_size": 128, 00:23:41.396 "iobuf_large_cache_size": 16 00:23:41.396 } 00:23:41.396 }, 00:23:41.396 { 00:23:41.396 "method": "bdev_raid_set_options", 00:23:41.396 "params": { 00:23:41.396 
"process_window_size_kb": 1024, 00:23:41.396 "process_max_bandwidth_mb_sec": 0 00:23:41.396 } 00:23:41.396 }, 00:23:41.396 { 00:23:41.396 "method": "bdev_iscsi_set_options", 00:23:41.396 "params": { 00:23:41.396 "timeout_sec": 30 00:23:41.396 } 00:23:41.396 }, 00:23:41.396 { 00:23:41.396 "method": "bdev_nvme_set_options", 00:23:41.396 "params": { 00:23:41.396 "action_on_timeout": "none", 00:23:41.396 "timeout_us": 0, 00:23:41.396 "timeout_admin_us": 0, 00:23:41.396 "keep_alive_timeout_ms": 10000, 00:23:41.396 "arbitration_burst": 0, 00:23:41.396 "low_priority_weight": 0, 00:23:41.396 "medium_priority_weight": 0, 00:23:41.396 "high_priority_weight": 0, 00:23:41.396 "nvme_adminq_poll_period_us": 10000, 00:23:41.396 "nvme_ioq_poll_period_us": 0, 00:23:41.396 "io_queue_requests": 512, 00:23:41.396 "delay_cmd_submit": true, 00:23:41.396 "transport_retry_count": 4, 00:23:41.396 "bdev_retry_count": 3, 00:23:41.396 "transport_ack_timeout": 0, 00:23:41.396 "ctrlr_loss_timeout_sec": 0, 00:23:41.396 "reconnect_delay_sec": 0, 00:23:41.396 "fast_io_fail_timeout_sec": 0, 00:23:41.396 "disable_auto_failback": false, 00:23:41.396 "generate_uuids": false, 00:23:41.396 "transport_tos": 0, 00:23:41.396 "nvme_error_stat": false, 00:23:41.396 "rdma_srq_size": 0, 00:23:41.396 "io_path_stat": false, 00:23:41.396 "allow_accel_sequence": false, 00:23:41.396 "rdma_max_cq_size": 0, 00:23:41.396 "rdma_cm_event_timeout_ms": 0, 00:23:41.396 "dhchap_digests": [ 00:23:41.396 "sha256", 00:23:41.396 "sha384", 00:23:41.396 "sha512" 00:23:41.396 ], 00:23:41.396 "dhchap_dhgroups": [ 00:23:41.396 "null", 00:23:41.396 "ffdhe2048", 00:23:41.396 "ffdhe3072", 00:23:41.396 "ffdhe4096", 00:23:41.396 "ffdhe6144", 00:23:41.396 "ffdhe8192" 00:23:41.396 ], 00:23:41.396 "rdma_umr_per_io": false 00:23:41.396 } 00:23:41.396 }, 00:23:41.396 { 00:23:41.396 "method": "bdev_nvme_attach_controller", 00:23:41.396 "params": { 00:23:41.396 "name": "TLSTEST", 00:23:41.396 "trtype": "TCP", 00:23:41.396 "adrfam": "IPv4", 00:23:41.396 "traddr": "10.0.0.2", 00:23:41.396 "trsvcid": "4420", 00:23:41.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.396 "prchk_reftag": false, 00:23:41.396 "prchk_guard": false, 00:23:41.396 "ctrlr_loss_timeout_sec": 0, 00:23:41.396 "reconnect_delay_sec": 0, 00:23:41.396 "fast_io_fail_timeout_sec": 0, 00:23:41.396 "psk": "key0", 00:23:41.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.396 "hdgst": false, 00:23:41.396 "ddgst": false, 00:23:41.396 "multipath": "multipath" 00:23:41.396 } 00:23:41.396 }, 00:23:41.396 { 00:23:41.396 "method": "bdev_nvme_set_hotplug", 00:23:41.396 "params": { 00:23:41.396 "period_us": 100000, 00:23:41.396 "enable": false 00:23:41.396 } 00:23:41.396 }, 00:23:41.396 { 00:23:41.396 "method": "bdev_wait_for_examine" 00:23:41.396 } 00:23:41.396 ] 00:23:41.396 }, 00:23:41.396 { 00:23:41.396 "subsystem": "nbd", 00:23:41.396 "config": [] 00:23:41.396 } 00:23:41.396 ] 00:23:41.396 }' 00:23:41.396 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.396 03:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.396 [2024-12-14 03:04:56.505209] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:41.396 [2024-12-14 03:04:56.505257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317483 ] 00:23:41.655 [2024-12-14 03:04:56.581518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.655 [2024-12-14 03:04:56.603246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.655 [2024-12-14 03:04:56.751713] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.221 03:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.221 03:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:42.221 03:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:42.479 Running I/O for 10 seconds... 00:23:44.350 5470.00 IOPS, 21.37 MiB/s [2024-12-14T02:05:00.860Z] 5481.00 IOPS, 21.41 MiB/s [2024-12-14T02:05:01.794Z] 5457.67 IOPS, 21.32 MiB/s [2024-12-14T02:05:02.730Z] 5451.25 IOPS, 21.29 MiB/s [2024-12-14T02:05:03.665Z] 5375.40 IOPS, 21.00 MiB/s [2024-12-14T02:05:04.601Z] 5361.83 IOPS, 20.94 MiB/s [2024-12-14T02:05:05.619Z] 5310.71 IOPS, 20.74 MiB/s [2024-12-14T02:05:06.554Z] 5311.62 IOPS, 20.75 MiB/s [2024-12-14T02:05:07.490Z] 5293.89 IOPS, 20.68 MiB/s [2024-12-14T02:05:07.490Z] 5293.40 IOPS, 20.68 MiB/s 00:23:52.357 Latency(us) 00:23:52.357 [2024-12-14T02:05:07.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.357 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:52.357 Verification LBA range: start 0x0 length 0x2000 00:23:52.357 TLSTESTn1 : 10.01 5298.72 20.70 0.00 0.00 24121.87 5336.50 31332.45 00:23:52.357 [2024-12-14T02:05:07.490Z] =================================================================================================================== 00:23:52.357 [2024-12-14T02:05:07.490Z] Total : 5298.72 20.70 0.00 0.00 24121.87 5336.50 31332.45 00:23:52.357 { 00:23:52.357 "results": [ 00:23:52.357 { 00:23:52.357 "job": "TLSTESTn1", 00:23:52.357 "core_mask": "0x4", 00:23:52.357 "workload": "verify", 00:23:52.357 "status": "finished", 00:23:52.357 "verify_range": { 00:23:52.357 "start": 0, 00:23:52.357 "length": 8192 00:23:52.357 }, 00:23:52.357 "queue_depth": 128, 00:23:52.357 "io_size": 4096, 00:23:52.357 "runtime": 10.013921, 00:23:52.357 "iops": 5298.723646811274, 00:23:52.357 "mibps": 20.69813924535654, 00:23:52.357 "io_failed": 0, 00:23:52.357 "io_timeout": 0, 00:23:52.357 "avg_latency_us": 24121.87094580272, 00:23:52.357 "min_latency_us": 5336.5028571428575, 00:23:52.357 "max_latency_us": 31332.449523809522 00:23:52.357 } 00:23:52.357 ], 00:23:52.357 "core_count": 1 00:23:52.357 } 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 317483 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317483 ']' 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317483 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317483 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317483' 00:23:52.616 killing process with pid 317483 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317483 00:23:52.616 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.616 00:23:52.616 Latency(us) 00:23:52.616 [2024-12-14T02:05:07.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.616 [2024-12-14T02:05:07.749Z] =================================================================================================================== 00:23:52.616 [2024-12-14T02:05:07.749Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317483 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 317454 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317454 ']' 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317454 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317454 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317454' 00:23:52.616 killing process with pid 317454 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317454 00:23:52.616 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317454 00:23:52.875 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:52.875 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.875 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.875 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.875 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=317636 00:23:52.875 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 317636 00:23:52.875 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:52.875 03:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317636 ']' 00:23:52.875 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.875 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.875 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.875 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.875 03:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.875 [2024-12-14 03:05:07.961182] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:52.875 [2024-12-14 03:05:07.961227] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.134 [2024-12-14 03:05:08.035148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.134 [2024-12-14 03:05:08.063458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.134 [2024-12-14 03:05:08.063500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.134 [2024-12-14 03:05:08.063511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.134 [2024-12-14 03:05:08.063521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.134 [2024-12-14 03:05:08.063530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
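The nvmfappstart/waitforlisten pair above starts nvmf_tgt inside the cvl_0_0_ns_spdk namespace and blocks until its RPC socket at /var/tmp/spdk.sock answers; the helper body itself is not expanded in this trace. A rough, hypothetical approximation of that wait loop (rpc_get_methods is used here only as a cheap RPC to poll; the real helper may differ):

    # start the target in the test namespace and remember its pid
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # poll the default RPC socket until the application is listening
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done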
00:23:53.134 [2024-12-14 03:05:08.064187] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.134 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.134 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:53.134 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.134 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.134 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.135 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.135 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.FFI3omtsdz 00:23:53.135 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.FFI3omtsdz 00:23:53.135 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:53.394 [2024-12-14 03:05:08.390747] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.394 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:53.652 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:53.652 [2024-12-14 03:05:08.779736] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.652 [2024-12-14 03:05:08.779930] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.911 03:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:53.911 malloc0 00:23:53.911 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:54.169 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FFI3omtsdz 00:23:54.428 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:54.687 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:54.687 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=317686 00:23:54.687 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.687 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 317686 /var/tmp/bdevperf.sock 00:23:54.687 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 317686 ']' 00:23:54.687 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.687 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.687 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.687 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.687 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.687 [2024-12-14 03:05:09.616475] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:54.687 [2024-12-14 03:05:09.616524] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317686 ] 00:23:54.687 [2024-12-14 03:05:09.690583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.687 [2024-12-14 03:05:09.712412] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.687 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.687 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:54.687 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FFI3omtsdz 00:23:54.945 03:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:55.204 [2024-12-14 03:05:10.163046] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.204 nvme0n1 00:23:55.204 03:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:55.462 Running I/O for 1 seconds... 
00:23:56.399 5275.00 IOPS, 20.61 MiB/s 00:23:56.399 Latency(us) 00:23:56.399 [2024-12-14T02:05:11.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.399 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:56.399 Verification LBA range: start 0x0 length 0x2000 00:23:56.399 nvme0n1 : 1.01 5333.28 20.83 0.00 0.00 23837.07 4743.56 28711.01 00:23:56.399 [2024-12-14T02:05:11.532Z] =================================================================================================================== 00:23:56.399 [2024-12-14T02:05:11.532Z] Total : 5333.28 20.83 0.00 0.00 23837.07 4743.56 28711.01 00:23:56.399 { 00:23:56.399 "results": [ 00:23:56.399 { 00:23:56.399 "job": "nvme0n1", 00:23:56.399 "core_mask": "0x2", 00:23:56.399 "workload": "verify", 00:23:56.399 "status": "finished", 00:23:56.399 "verify_range": { 00:23:56.399 "start": 0, 00:23:56.399 "length": 8192 00:23:56.399 }, 00:23:56.399 "queue_depth": 128, 00:23:56.399 "io_size": 4096, 00:23:56.399 "runtime": 1.013073, 00:23:56.399 "iops": 5333.278055974249, 00:23:56.399 "mibps": 20.83311740614941, 00:23:56.399 "io_failed": 0, 00:23:56.399 "io_timeout": 0, 00:23:56.399 "avg_latency_us": 23837.067347769756, 00:23:56.399 "min_latency_us": 4743.558095238095, 00:23:56.399 "max_latency_us": 28711.009523809524 00:23:56.399 } 00:23:56.399 ], 00:23:56.399 "core_count": 1 00:23:56.399 } 00:23:56.399 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 317686 00:23:56.399 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317686 ']' 00:23:56.399 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317686 00:23:56.399 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:56.399 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.399 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317686 00:23:56.399 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:56.399 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:56.399 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317686' 00:23:56.399 killing process with pid 317686 00:23:56.399 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317686 00:23:56.399 Received shutdown signal, test time was about 1.000000 seconds 00:23:56.399 00:23:56.399 Latency(us) 00:23:56.399 [2024-12-14T02:05:11.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.399 [2024-12-14T02:05:11.532Z] =================================================================================================================== 00:23:56.399 [2024-12-14T02:05:11.532Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.399 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317686 00:23:56.658 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 317636 00:23:56.658 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317636 ']' 00:23:56.658 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317636 00:23:56.658 03:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:56.658 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.658 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317636 00:23:56.658 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:56.658 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:56.658 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317636' 00:23:56.658 killing process with pid 317636 00:23:56.658 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317636 00:23:56.658 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317636 00:23:56.917 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:56.917 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:56.917 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.917 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.917 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=317728 00:23:56.917 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:56.917 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 317728 00:23:56.917 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317728 ']' 00:23:56.917 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.917 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.917 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.917 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.917 03:05:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.917 [2024-12-14 03:05:11.846645] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:56.917 [2024-12-14 03:05:11.846691] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.917 [2024-12-14 03:05:11.908160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.917 [2024-12-14 03:05:11.928963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.917 [2024-12-14 03:05:11.928998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:56.917 [2024-12-14 03:05:11.929006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.917 [2024-12-14 03:05:11.929012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.917 [2024-12-14 03:05:11.929017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.917 [2024-12-14 03:05:11.929495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.917 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.917 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:56.917 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:56.917 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:56.917 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.917 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.917 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:56.917 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.176 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.176 [2024-12-14 03:05:12.055521] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.176 malloc0 00:23:57.176 [2024-12-14 03:05:12.083425] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:57.176 [2024-12-14 03:05:12.083608] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.176 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.176 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=317747 00:23:57.176 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 317747 /var/tmp/bdevperf.sock 00:23:57.176 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:57.177 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317747 ']' 00:23:57.177 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.177 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.177 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.177 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.177 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.177 [2024-12-14 03:05:12.160044] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
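For this final sub-test the target is configured through rpc_cmd, so only the resulting NOTICE lines (TCP transport init, malloc0, the TLS listener on 10.0.0.2 port 4420) are visible above rather than the individual calls. Judging from the explicit setup_nvmf_tgt sequence earlier in this log (target/tls.sh@52 through @59), the equivalent RPC sequence is roughly:

    # target-side TLS setup, mirroring the earlier setup_nvmf_tgt helper
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FFI3omtsdz
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0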
00:23:57.177 [2024-12-14 03:05:12.160084] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317747 ] 00:23:57.177 [2024-12-14 03:05:12.234836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.177 [2024-12-14 03:05:12.257621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.436 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.436 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:57.436 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FFI3omtsdz 00:23:57.436 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:57.695 [2024-12-14 03:05:12.705635] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:57.695 nvme0n1 00:23:57.695 03:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:57.953 Running I/O for 1 seconds... 00:23:58.889 4852.00 IOPS, 18.95 MiB/s 00:23:58.889 Latency(us) 00:23:58.889 [2024-12-14T02:05:14.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.889 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:58.889 Verification LBA range: start 0x0 length 0x2000 00:23:58.889 nvme0n1 : 1.01 4920.74 19.22 0.00 0.00 25845.73 4930.80 27337.87 00:23:58.889 [2024-12-14T02:05:14.022Z] =================================================================================================================== 00:23:58.889 [2024-12-14T02:05:14.022Z] Total : 4920.74 19.22 0.00 0.00 25845.73 4930.80 27337.87 00:23:58.889 { 00:23:58.889 "results": [ 00:23:58.889 { 00:23:58.889 "job": "nvme0n1", 00:23:58.889 "core_mask": "0x2", 00:23:58.889 "workload": "verify", 00:23:58.889 "status": "finished", 00:23:58.889 "verify_range": { 00:23:58.889 "start": 0, 00:23:58.889 "length": 8192 00:23:58.889 }, 00:23:58.889 "queue_depth": 128, 00:23:58.889 "io_size": 4096, 00:23:58.889 "runtime": 1.012042, 00:23:58.889 "iops": 4920.7443959835655, 00:23:58.889 "mibps": 19.221657796810803, 00:23:58.889 "io_failed": 0, 00:23:58.889 "io_timeout": 0, 00:23:58.889 "avg_latency_us": 25845.73336699178, 00:23:58.889 "min_latency_us": 4930.80380952381, 00:23:58.889 "max_latency_us": 27337.874285714286 00:23:58.889 } 00:23:58.889 ], 00:23:58.889 "core_count": 1 00:23:58.889 } 00:23:58.889 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:58.889 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.889 03:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.148 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.148 03:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:59.148 "subsystems": [ 00:23:59.148 { 00:23:59.148 "subsystem": "keyring", 00:23:59.148 "config": [ 00:23:59.148 { 00:23:59.148 "method": "keyring_file_add_key", 00:23:59.148 "params": { 00:23:59.148 "name": "key0", 00:23:59.148 "path": "/tmp/tmp.FFI3omtsdz" 00:23:59.148 } 00:23:59.148 } 00:23:59.148 ] 00:23:59.148 }, 00:23:59.148 { 00:23:59.148 "subsystem": "iobuf", 00:23:59.148 "config": [ 00:23:59.148 { 00:23:59.148 "method": "iobuf_set_options", 00:23:59.148 "params": { 00:23:59.148 "small_pool_count": 8192, 00:23:59.148 "large_pool_count": 1024, 00:23:59.148 "small_bufsize": 8192, 00:23:59.148 "large_bufsize": 135168, 00:23:59.148 "enable_numa": false 00:23:59.148 } 00:23:59.148 } 00:23:59.148 ] 00:23:59.148 }, 00:23:59.148 { 00:23:59.148 "subsystem": "sock", 00:23:59.148 "config": [ 00:23:59.148 { 00:23:59.148 "method": "sock_set_default_impl", 00:23:59.148 "params": { 00:23:59.148 "impl_name": "posix" 00:23:59.148 } 00:23:59.148 }, 00:23:59.148 { 00:23:59.148 "method": "sock_impl_set_options", 00:23:59.148 "params": { 00:23:59.148 "impl_name": "ssl", 00:23:59.148 "recv_buf_size": 4096, 00:23:59.148 "send_buf_size": 4096, 00:23:59.148 "enable_recv_pipe": true, 00:23:59.148 "enable_quickack": false, 00:23:59.148 "enable_placement_id": 0, 00:23:59.148 "enable_zerocopy_send_server": true, 00:23:59.148 "enable_zerocopy_send_client": false, 00:23:59.148 "zerocopy_threshold": 0, 00:23:59.148 "tls_version": 0, 00:23:59.148 "enable_ktls": false 00:23:59.148 } 00:23:59.148 }, 00:23:59.148 { 00:23:59.148 "method": "sock_impl_set_options", 00:23:59.148 "params": { 00:23:59.148 "impl_name": "posix", 00:23:59.148 "recv_buf_size": 2097152, 00:23:59.148 "send_buf_size": 2097152, 00:23:59.148 "enable_recv_pipe": true, 00:23:59.148 "enable_quickack": false, 00:23:59.148 "enable_placement_id": 0, 00:23:59.148 "enable_zerocopy_send_server": true, 00:23:59.148 "enable_zerocopy_send_client": false, 00:23:59.148 "zerocopy_threshold": 0, 00:23:59.148 "tls_version": 0, 00:23:59.148 "enable_ktls": false 00:23:59.148 } 00:23:59.148 } 00:23:59.148 ] 00:23:59.148 }, 00:23:59.148 { 00:23:59.148 "subsystem": "vmd", 00:23:59.148 "config": [] 00:23:59.148 }, 00:23:59.148 { 00:23:59.148 "subsystem": "accel", 00:23:59.148 "config": [ 00:23:59.148 { 00:23:59.148 "method": "accel_set_options", 00:23:59.148 "params": { 00:23:59.148 "small_cache_size": 128, 00:23:59.148 "large_cache_size": 16, 00:23:59.148 "task_count": 2048, 00:23:59.148 "sequence_count": 2048, 00:23:59.148 "buf_count": 2048 00:23:59.148 } 00:23:59.148 } 00:23:59.148 ] 00:23:59.148 }, 00:23:59.148 { 00:23:59.148 "subsystem": "bdev", 00:23:59.148 "config": [ 00:23:59.148 { 00:23:59.148 "method": "bdev_set_options", 00:23:59.148 "params": { 00:23:59.148 "bdev_io_pool_size": 65535, 00:23:59.148 "bdev_io_cache_size": 256, 00:23:59.148 "bdev_auto_examine": true, 00:23:59.148 "iobuf_small_cache_size": 128, 00:23:59.148 "iobuf_large_cache_size": 16 00:23:59.148 } 00:23:59.148 }, 00:23:59.148 { 00:23:59.148 "method": "bdev_raid_set_options", 00:23:59.148 "params": { 00:23:59.148 "process_window_size_kb": 1024, 00:23:59.148 "process_max_bandwidth_mb_sec": 0 00:23:59.148 } 00:23:59.148 }, 00:23:59.148 { 00:23:59.148 "method": "bdev_iscsi_set_options", 00:23:59.148 "params": { 00:23:59.148 "timeout_sec": 30 00:23:59.148 } 00:23:59.148 }, 00:23:59.148 { 00:23:59.148 "method": "bdev_nvme_set_options", 00:23:59.148 "params": { 00:23:59.148 "action_on_timeout": "none", 00:23:59.148 
"timeout_us": 0, 00:23:59.148 "timeout_admin_us": 0, 00:23:59.148 "keep_alive_timeout_ms": 10000, 00:23:59.148 "arbitration_burst": 0, 00:23:59.148 "low_priority_weight": 0, 00:23:59.148 "medium_priority_weight": 0, 00:23:59.148 "high_priority_weight": 0, 00:23:59.148 "nvme_adminq_poll_period_us": 10000, 00:23:59.148 "nvme_ioq_poll_period_us": 0, 00:23:59.148 "io_queue_requests": 0, 00:23:59.148 "delay_cmd_submit": true, 00:23:59.148 "transport_retry_count": 4, 00:23:59.148 "bdev_retry_count": 3, 00:23:59.148 "transport_ack_timeout": 0, 00:23:59.148 "ctrlr_loss_timeout_sec": 0, 00:23:59.148 "reconnect_delay_sec": 0, 00:23:59.148 "fast_io_fail_timeout_sec": 0, 00:23:59.148 "disable_auto_failback": false, 00:23:59.148 "generate_uuids": false, 00:23:59.148 "transport_tos": 0, 00:23:59.148 "nvme_error_stat": false, 00:23:59.148 "rdma_srq_size": 0, 00:23:59.148 "io_path_stat": false, 00:23:59.148 "allow_accel_sequence": false, 00:23:59.148 "rdma_max_cq_size": 0, 00:23:59.148 "rdma_cm_event_timeout_ms": 0, 00:23:59.148 "dhchap_digests": [ 00:23:59.148 "sha256", 00:23:59.148 "sha384", 00:23:59.148 "sha512" 00:23:59.148 ], 00:23:59.148 "dhchap_dhgroups": [ 00:23:59.148 "null", 00:23:59.148 "ffdhe2048", 00:23:59.148 "ffdhe3072", 00:23:59.148 "ffdhe4096", 00:23:59.148 "ffdhe6144", 00:23:59.148 "ffdhe8192" 00:23:59.148 ], 00:23:59.148 "rdma_umr_per_io": false 00:23:59.148 } 00:23:59.148 }, 00:23:59.148 { 00:23:59.148 "method": "bdev_nvme_set_hotplug", 00:23:59.148 "params": { 00:23:59.148 "period_us": 100000, 00:23:59.149 "enable": false 00:23:59.149 } 00:23:59.149 }, 00:23:59.149 { 00:23:59.149 "method": "bdev_malloc_create", 00:23:59.149 "params": { 00:23:59.149 "name": "malloc0", 00:23:59.149 "num_blocks": 8192, 00:23:59.149 "block_size": 4096, 00:23:59.149 "physical_block_size": 4096, 00:23:59.149 "uuid": "13791d2a-b0f0-4c66-8de1-152cf7e8fca7", 00:23:59.149 "optimal_io_boundary": 0, 00:23:59.149 "md_size": 0, 00:23:59.149 "dif_type": 0, 00:23:59.149 "dif_is_head_of_md": false, 00:23:59.149 "dif_pi_format": 0 00:23:59.149 } 00:23:59.149 }, 00:23:59.149 { 00:23:59.149 "method": "bdev_wait_for_examine" 00:23:59.149 } 00:23:59.149 ] 00:23:59.149 }, 00:23:59.149 { 00:23:59.149 "subsystem": "nbd", 00:23:59.149 "config": [] 00:23:59.149 }, 00:23:59.149 { 00:23:59.149 "subsystem": "scheduler", 00:23:59.149 "config": [ 00:23:59.149 { 00:23:59.149 "method": "framework_set_scheduler", 00:23:59.149 "params": { 00:23:59.149 "name": "static" 00:23:59.149 } 00:23:59.149 } 00:23:59.149 ] 00:23:59.149 }, 00:23:59.149 { 00:23:59.149 "subsystem": "nvmf", 00:23:59.149 "config": [ 00:23:59.149 { 00:23:59.149 "method": "nvmf_set_config", 00:23:59.149 "params": { 00:23:59.149 "discovery_filter": "match_any", 00:23:59.149 "admin_cmd_passthru": { 00:23:59.149 "identify_ctrlr": false 00:23:59.149 }, 00:23:59.149 "dhchap_digests": [ 00:23:59.149 "sha256", 00:23:59.149 "sha384", 00:23:59.149 "sha512" 00:23:59.149 ], 00:23:59.149 "dhchap_dhgroups": [ 00:23:59.149 "null", 00:23:59.149 "ffdhe2048", 00:23:59.149 "ffdhe3072", 00:23:59.149 "ffdhe4096", 00:23:59.149 "ffdhe6144", 00:23:59.149 "ffdhe8192" 00:23:59.149 ] 00:23:59.149 } 00:23:59.149 }, 00:23:59.149 { 00:23:59.149 "method": "nvmf_set_max_subsystems", 00:23:59.149 "params": { 00:23:59.149 "max_subsystems": 1024 00:23:59.149 } 00:23:59.149 }, 00:23:59.149 { 00:23:59.149 "method": "nvmf_set_crdt", 00:23:59.149 "params": { 00:23:59.149 "crdt1": 0, 00:23:59.149 "crdt2": 0, 00:23:59.149 "crdt3": 0 00:23:59.149 } 00:23:59.149 }, 00:23:59.149 { 00:23:59.149 "method": 
"nvmf_create_transport", 00:23:59.149 "params": { 00:23:59.149 "trtype": "TCP", 00:23:59.149 "max_queue_depth": 128, 00:23:59.149 "max_io_qpairs_per_ctrlr": 127, 00:23:59.149 "in_capsule_data_size": 4096, 00:23:59.149 "max_io_size": 131072, 00:23:59.149 "io_unit_size": 131072, 00:23:59.149 "max_aq_depth": 128, 00:23:59.149 "num_shared_buffers": 511, 00:23:59.149 "buf_cache_size": 4294967295, 00:23:59.149 "dif_insert_or_strip": false, 00:23:59.149 "zcopy": false, 00:23:59.149 "c2h_success": false, 00:23:59.149 "sock_priority": 0, 00:23:59.149 "abort_timeout_sec": 1, 00:23:59.149 "ack_timeout": 0, 00:23:59.149 "data_wr_pool_size": 0 00:23:59.149 } 00:23:59.149 }, 00:23:59.149 { 00:23:59.149 "method": "nvmf_create_subsystem", 00:23:59.149 "params": { 00:23:59.149 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.149 "allow_any_host": false, 00:23:59.149 "serial_number": "00000000000000000000", 00:23:59.149 "model_number": "SPDK bdev Controller", 00:23:59.149 "max_namespaces": 32, 00:23:59.149 "min_cntlid": 1, 00:23:59.149 "max_cntlid": 65519, 00:23:59.149 "ana_reporting": false 00:23:59.149 } 00:23:59.149 }, 00:23:59.149 { 00:23:59.149 "method": "nvmf_subsystem_add_host", 00:23:59.149 "params": { 00:23:59.149 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.149 "host": "nqn.2016-06.io.spdk:host1", 00:23:59.149 "psk": "key0" 00:23:59.149 } 00:23:59.149 }, 00:23:59.149 { 00:23:59.149 "method": "nvmf_subsystem_add_ns", 00:23:59.149 "params": { 00:23:59.149 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.149 "namespace": { 00:23:59.149 "nsid": 1, 00:23:59.149 "bdev_name": "malloc0", 00:23:59.149 "nguid": "13791D2AB0F04C668DE1152CF7E8FCA7", 00:23:59.149 "uuid": "13791d2a-b0f0-4c66-8de1-152cf7e8fca7", 00:23:59.149 "no_auto_visible": false 00:23:59.149 } 00:23:59.149 } 00:23:59.149 }, 00:23:59.149 { 00:23:59.149 "method": "nvmf_subsystem_add_listener", 00:23:59.149 "params": { 00:23:59.149 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.149 "listen_address": { 00:23:59.149 "trtype": "TCP", 00:23:59.149 "adrfam": "IPv4", 00:23:59.149 "traddr": "10.0.0.2", 00:23:59.149 "trsvcid": "4420" 00:23:59.149 }, 00:23:59.149 "secure_channel": false, 00:23:59.149 "sock_impl": "ssl" 00:23:59.149 } 00:23:59.149 } 00:23:59.149 ] 00:23:59.149 } 00:23:59.149 ] 00:23:59.149 }' 00:23:59.149 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:59.408 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:59.408 "subsystems": [ 00:23:59.408 { 00:23:59.408 "subsystem": "keyring", 00:23:59.408 "config": [ 00:23:59.408 { 00:23:59.408 "method": "keyring_file_add_key", 00:23:59.408 "params": { 00:23:59.408 "name": "key0", 00:23:59.408 "path": "/tmp/tmp.FFI3omtsdz" 00:23:59.408 } 00:23:59.408 } 00:23:59.408 ] 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "subsystem": "iobuf", 00:23:59.408 "config": [ 00:23:59.408 { 00:23:59.408 "method": "iobuf_set_options", 00:23:59.408 "params": { 00:23:59.408 "small_pool_count": 8192, 00:23:59.408 "large_pool_count": 1024, 00:23:59.408 "small_bufsize": 8192, 00:23:59.408 "large_bufsize": 135168, 00:23:59.408 "enable_numa": false 00:23:59.408 } 00:23:59.408 } 00:23:59.408 ] 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "subsystem": "sock", 00:23:59.408 "config": [ 00:23:59.408 { 00:23:59.408 "method": "sock_set_default_impl", 00:23:59.408 "params": { 00:23:59.408 "impl_name": "posix" 00:23:59.408 } 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 
"method": "sock_impl_set_options", 00:23:59.408 "params": { 00:23:59.408 "impl_name": "ssl", 00:23:59.408 "recv_buf_size": 4096, 00:23:59.408 "send_buf_size": 4096, 00:23:59.408 "enable_recv_pipe": true, 00:23:59.408 "enable_quickack": false, 00:23:59.408 "enable_placement_id": 0, 00:23:59.408 "enable_zerocopy_send_server": true, 00:23:59.408 "enable_zerocopy_send_client": false, 00:23:59.408 "zerocopy_threshold": 0, 00:23:59.408 "tls_version": 0, 00:23:59.408 "enable_ktls": false 00:23:59.408 } 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "method": "sock_impl_set_options", 00:23:59.408 "params": { 00:23:59.408 "impl_name": "posix", 00:23:59.408 "recv_buf_size": 2097152, 00:23:59.408 "send_buf_size": 2097152, 00:23:59.408 "enable_recv_pipe": true, 00:23:59.408 "enable_quickack": false, 00:23:59.408 "enable_placement_id": 0, 00:23:59.408 "enable_zerocopy_send_server": true, 00:23:59.408 "enable_zerocopy_send_client": false, 00:23:59.408 "zerocopy_threshold": 0, 00:23:59.408 "tls_version": 0, 00:23:59.408 "enable_ktls": false 00:23:59.408 } 00:23:59.408 } 00:23:59.408 ] 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "subsystem": "vmd", 00:23:59.408 "config": [] 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "subsystem": "accel", 00:23:59.408 "config": [ 00:23:59.408 { 00:23:59.409 "method": "accel_set_options", 00:23:59.409 "params": { 00:23:59.409 "small_cache_size": 128, 00:23:59.409 "large_cache_size": 16, 00:23:59.409 "task_count": 2048, 00:23:59.409 "sequence_count": 2048, 00:23:59.409 "buf_count": 2048 00:23:59.409 } 00:23:59.409 } 00:23:59.409 ] 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "subsystem": "bdev", 00:23:59.409 "config": [ 00:23:59.409 { 00:23:59.409 "method": "bdev_set_options", 00:23:59.409 "params": { 00:23:59.409 "bdev_io_pool_size": 65535, 00:23:59.409 "bdev_io_cache_size": 256, 00:23:59.409 "bdev_auto_examine": true, 00:23:59.409 "iobuf_small_cache_size": 128, 00:23:59.409 "iobuf_large_cache_size": 16 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "bdev_raid_set_options", 00:23:59.409 "params": { 00:23:59.409 "process_window_size_kb": 1024, 00:23:59.409 "process_max_bandwidth_mb_sec": 0 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "bdev_iscsi_set_options", 00:23:59.409 "params": { 00:23:59.409 "timeout_sec": 30 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "bdev_nvme_set_options", 00:23:59.409 "params": { 00:23:59.409 "action_on_timeout": "none", 00:23:59.409 "timeout_us": 0, 00:23:59.409 "timeout_admin_us": 0, 00:23:59.409 "keep_alive_timeout_ms": 10000, 00:23:59.409 "arbitration_burst": 0, 00:23:59.409 "low_priority_weight": 0, 00:23:59.409 "medium_priority_weight": 0, 00:23:59.409 "high_priority_weight": 0, 00:23:59.409 "nvme_adminq_poll_period_us": 10000, 00:23:59.409 "nvme_ioq_poll_period_us": 0, 00:23:59.409 "io_queue_requests": 512, 00:23:59.409 "delay_cmd_submit": true, 00:23:59.409 "transport_retry_count": 4, 00:23:59.409 "bdev_retry_count": 3, 00:23:59.409 "transport_ack_timeout": 0, 00:23:59.409 "ctrlr_loss_timeout_sec": 0, 00:23:59.409 "reconnect_delay_sec": 0, 00:23:59.409 "fast_io_fail_timeout_sec": 0, 00:23:59.409 "disable_auto_failback": false, 00:23:59.409 "generate_uuids": false, 00:23:59.409 "transport_tos": 0, 00:23:59.409 "nvme_error_stat": false, 00:23:59.409 "rdma_srq_size": 0, 00:23:59.409 "io_path_stat": false, 00:23:59.409 "allow_accel_sequence": false, 00:23:59.409 "rdma_max_cq_size": 0, 00:23:59.409 "rdma_cm_event_timeout_ms": 0, 00:23:59.409 "dhchap_digests": [ 00:23:59.409 
"sha256", 00:23:59.409 "sha384", 00:23:59.409 "sha512" 00:23:59.409 ], 00:23:59.409 "dhchap_dhgroups": [ 00:23:59.409 "null", 00:23:59.409 "ffdhe2048", 00:23:59.409 "ffdhe3072", 00:23:59.409 "ffdhe4096", 00:23:59.409 "ffdhe6144", 00:23:59.409 "ffdhe8192" 00:23:59.409 ], 00:23:59.409 "rdma_umr_per_io": false 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "bdev_nvme_attach_controller", 00:23:59.409 "params": { 00:23:59.409 "name": "nvme0", 00:23:59.409 "trtype": "TCP", 00:23:59.409 "adrfam": "IPv4", 00:23:59.409 "traddr": "10.0.0.2", 00:23:59.409 "trsvcid": "4420", 00:23:59.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.409 "prchk_reftag": false, 00:23:59.409 "prchk_guard": false, 00:23:59.409 "ctrlr_loss_timeout_sec": 0, 00:23:59.409 "reconnect_delay_sec": 0, 00:23:59.409 "fast_io_fail_timeout_sec": 0, 00:23:59.409 "psk": "key0", 00:23:59.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.409 "hdgst": false, 00:23:59.409 "ddgst": false, 00:23:59.409 "multipath": "multipath" 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "bdev_nvme_set_hotplug", 00:23:59.409 "params": { 00:23:59.409 "period_us": 100000, 00:23:59.409 "enable": false 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "bdev_enable_histogram", 00:23:59.409 "params": { 00:23:59.409 "name": "nvme0n1", 00:23:59.409 "enable": true 00:23:59.409 } 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "method": "bdev_wait_for_examine" 00:23:59.409 } 00:23:59.409 ] 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "subsystem": "nbd", 00:23:59.409 "config": [] 00:23:59.409 } 00:23:59.409 ] 00:23:59.409 }' 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 317747 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317747 ']' 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317747 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317747 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317747' 00:23:59.409 killing process with pid 317747 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317747 00:23:59.409 Received shutdown signal, test time was about 1.000000 seconds 00:23:59.409 00:23:59.409 Latency(us) 00:23:59.409 [2024-12-14T02:05:14.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.409 [2024-12-14T02:05:14.542Z] =================================================================================================================== 00:23:59.409 [2024-12-14T02:05:14.542Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317747 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 317728 00:23:59.409 03:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317728 ']' 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317728 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.409 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317728 00:23:59.668 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.668 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.668 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317728' 00:23:59.668 killing process with pid 317728 00:23:59.668 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317728 00:23:59.668 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317728 00:23:59.668 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:59.668 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:59.668 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.668 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:59.668 "subsystems": [ 00:23:59.668 { 00:23:59.668 "subsystem": "keyring", 00:23:59.668 "config": [ 00:23:59.668 { 00:23:59.668 "method": "keyring_file_add_key", 00:23:59.668 "params": { 00:23:59.668 "name": "key0", 00:23:59.668 "path": "/tmp/tmp.FFI3omtsdz" 00:23:59.668 } 00:23:59.668 } 00:23:59.668 ] 00:23:59.668 }, 00:23:59.668 { 00:23:59.668 "subsystem": "iobuf", 00:23:59.668 "config": [ 00:23:59.668 { 00:23:59.668 "method": "iobuf_set_options", 00:23:59.668 "params": { 00:23:59.668 "small_pool_count": 8192, 00:23:59.668 "large_pool_count": 1024, 00:23:59.668 "small_bufsize": 8192, 00:23:59.669 "large_bufsize": 135168, 00:23:59.669 "enable_numa": false 00:23:59.669 } 00:23:59.669 } 00:23:59.669 ] 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "subsystem": "sock", 00:23:59.669 "config": [ 00:23:59.669 { 00:23:59.669 "method": "sock_set_default_impl", 00:23:59.669 "params": { 00:23:59.669 "impl_name": "posix" 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "sock_impl_set_options", 00:23:59.669 "params": { 00:23:59.669 "impl_name": "ssl", 00:23:59.669 "recv_buf_size": 4096, 00:23:59.669 "send_buf_size": 4096, 00:23:59.669 "enable_recv_pipe": true, 00:23:59.669 "enable_quickack": false, 00:23:59.669 "enable_placement_id": 0, 00:23:59.669 "enable_zerocopy_send_server": true, 00:23:59.669 "enable_zerocopy_send_client": false, 00:23:59.669 "zerocopy_threshold": 0, 00:23:59.669 "tls_version": 0, 00:23:59.669 "enable_ktls": false 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "sock_impl_set_options", 00:23:59.669 "params": { 00:23:59.669 "impl_name": "posix", 00:23:59.669 "recv_buf_size": 2097152, 00:23:59.669 "send_buf_size": 2097152, 00:23:59.669 "enable_recv_pipe": true, 00:23:59.669 "enable_quickack": false, 00:23:59.669 "enable_placement_id": 0, 00:23:59.669 "enable_zerocopy_send_server": true, 00:23:59.669 "enable_zerocopy_send_client": false, 00:23:59.669 
"zerocopy_threshold": 0, 00:23:59.669 "tls_version": 0, 00:23:59.669 "enable_ktls": false 00:23:59.669 } 00:23:59.669 } 00:23:59.669 ] 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "subsystem": "vmd", 00:23:59.669 "config": [] 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "subsystem": "accel", 00:23:59.669 "config": [ 00:23:59.669 { 00:23:59.669 "method": "accel_set_options", 00:23:59.669 "params": { 00:23:59.669 "small_cache_size": 128, 00:23:59.669 "large_cache_size": 16, 00:23:59.669 "task_count": 2048, 00:23:59.669 "sequence_count": 2048, 00:23:59.669 "buf_count": 2048 00:23:59.669 } 00:23:59.669 } 00:23:59.669 ] 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "subsystem": "bdev", 00:23:59.669 "config": [ 00:23:59.669 { 00:23:59.669 "method": "bdev_set_options", 00:23:59.669 "params": { 00:23:59.669 "bdev_io_pool_size": 65535, 00:23:59.669 "bdev_io_cache_size": 256, 00:23:59.669 "bdev_auto_examine": true, 00:23:59.669 "iobuf_small_cache_size": 128, 00:23:59.669 "iobuf_large_cache_size": 16 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "bdev_raid_set_options", 00:23:59.669 "params": { 00:23:59.669 "process_window_size_kb": 1024, 00:23:59.669 "process_max_bandwidth_mb_sec": 0 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "bdev_iscsi_set_options", 00:23:59.669 "params": { 00:23:59.669 "timeout_sec": 30 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "bdev_nvme_set_options", 00:23:59.669 "params": { 00:23:59.669 "action_on_timeout": "none", 00:23:59.669 "timeout_us": 0, 00:23:59.669 "timeout_admin_us": 0, 00:23:59.669 "keep_alive_timeout_ms": 10000, 00:23:59.669 "arbitration_burst": 0, 00:23:59.669 "low_priority_weight": 0, 00:23:59.669 "medium_priority_weight": 0, 00:23:59.669 "high_priority_weight": 0, 00:23:59.669 "nvme_adminq_poll_period_us": 10000, 00:23:59.669 "nvme_ioq_poll_period_us": 0, 00:23:59.669 "io_queue_requests": 0, 00:23:59.669 "delay_cmd_submit": true, 00:23:59.669 "transport_retry_count": 4, 00:23:59.669 "bdev_retry_count": 3, 00:23:59.669 "transport_ack_timeout": 0, 00:23:59.669 "ctrlr_loss_timeout_sec": 0, 00:23:59.669 "reconnect_delay_sec": 0, 00:23:59.669 "fast_io_fail_timeout_sec": 0, 00:23:59.669 "disable_auto_failback": false, 00:23:59.669 "generate_uuids": false, 00:23:59.669 "transport_tos": 0, 00:23:59.669 "nvme_error_stat": false, 00:23:59.669 "rdma_srq_size": 0, 00:23:59.669 "io_path_stat": false, 00:23:59.669 "allow_accel_sequence": false, 00:23:59.669 "rdma_max_cq_size": 0, 00:23:59.669 "rdma_cm_event_timeout_ms": 0, 00:23:59.669 "dhchap_digests": [ 00:23:59.669 "sha256", 00:23:59.669 "sha384", 00:23:59.669 "sha512" 00:23:59.669 ], 00:23:59.669 "dhchap_dhgroups": [ 00:23:59.669 "null", 00:23:59.669 "ffdhe2048", 00:23:59.669 "ffdhe3072", 00:23:59.669 "ffdhe4096", 00:23:59.669 "ffdhe6144", 00:23:59.669 "ffdhe8192" 00:23:59.669 ], 00:23:59.669 "rdma_umr_per_io": false 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "bdev_nvme_set_hotplug", 00:23:59.669 "params": { 00:23:59.669 "period_us": 100000, 00:23:59.669 "enable": false 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "bdev_malloc_create", 00:23:59.669 "params": { 00:23:59.669 "name": "malloc0", 00:23:59.669 "num_blocks": 8192, 00:23:59.669 "block_size": 4096, 00:23:59.669 "physical_block_size": 4096, 00:23:59.669 "uuid": "13791d2a-b0f0-4c66-8de1-152cf7e8fca7", 00:23:59.669 "optimal_io_boundary": 0, 00:23:59.669 "md_size": 0, 00:23:59.669 "dif_type": 0, 00:23:59.669 "dif_is_head_of_md": false, 00:23:59.669 
"dif_pi_format": 0 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "bdev_wait_for_examine" 00:23:59.669 } 00:23:59.669 ] 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "subsystem": "nbd", 00:23:59.669 "config": [] 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "subsystem": "scheduler", 00:23:59.669 "config": [ 00:23:59.669 { 00:23:59.669 "method": "framework_set_scheduler", 00:23:59.669 "params": { 00:23:59.669 "name": "static" 00:23:59.669 } 00:23:59.669 } 00:23:59.669 ] 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "subsystem": "nvmf", 00:23:59.669 "config": [ 00:23:59.669 { 00:23:59.669 "method": "nvmf_set_config", 00:23:59.669 "params": { 00:23:59.669 "discovery_filter": "match_any", 00:23:59.669 "admin_cmd_passthru": { 00:23:59.669 "identify_ctrlr": false 00:23:59.669 }, 00:23:59.669 "dhchap_digests": [ 00:23:59.669 "sha256", 00:23:59.669 "sha384", 00:23:59.669 "sha512" 00:23:59.669 ], 00:23:59.669 "dhchap_dhgroups": [ 00:23:59.669 "null", 00:23:59.669 "ffdhe2048", 00:23:59.669 "ffdhe3072", 00:23:59.669 "ffdhe4096", 00:23:59.669 "ffdhe6144", 00:23:59.669 "ffdhe8192" 00:23:59.669 ] 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "nvmf_set_max_subsystems", 00:23:59.669 "params": { 00:23:59.669 "max_subsystems": 1024 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "nvmf_set_crdt", 00:23:59.669 "params": { 00:23:59.669 "crdt1": 0, 00:23:59.669 "crdt2": 0, 00:23:59.669 "crdt3": 0 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "nvmf_create_transport", 00:23:59.669 "params": { 00:23:59.669 "trtype": "TCP", 00:23:59.669 "max_queue_depth": 128, 00:23:59.669 "max_io_qpairs_per_ctrlr": 127, 00:23:59.669 "in_capsule_data_size": 4096, 00:23:59.669 "max_io_size": 131072, 00:23:59.669 "io_unit_size": 131072, 00:23:59.669 "max_aq_depth": 128, 00:23:59.669 "num_shared_buffers": 511, 00:23:59.669 "buf_cache_size": 4294967295, 00:23:59.669 "dif_insert_or_strip": false, 00:23:59.669 "zcopy": false, 00:23:59.669 "c2h_success": false, 00:23:59.669 "sock_priority": 0, 00:23:59.669 "abort_timeout_sec": 1, 00:23:59.669 "ack_timeout": 0, 00:23:59.669 "data_wr_pool_size": 0 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "nvmf_create_subsystem", 00:23:59.669 "params": { 00:23:59.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.669 "allow_any_host": false, 00:23:59.669 "serial_number": "00000000000000000000", 00:23:59.669 "model_number": "SPDK bdev Controller", 00:23:59.669 "max_namespaces": 32, 00:23:59.669 "min_cntlid": 1, 00:23:59.669 "max_cntlid": 65519, 00:23:59.669 "ana_reporting": false 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "nvmf_subsystem_add_host", 00:23:59.669 "params": { 00:23:59.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.669 "host": "nqn.2016-06.io.spdk:host1", 00:23:59.669 "psk": "key0" 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "nvmf_subsystem_add_ns", 00:23:59.669 "params": { 00:23:59.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.669 "namespace": { 00:23:59.669 "nsid": 1, 00:23:59.669 "bdev_name": "malloc0", 00:23:59.669 "nguid": "13791D2AB0F04C668DE1152CF7E8FCA7", 00:23:59.669 "uuid": "13791d2a-b0f0-4c66-8de1-152cf7e8fca7", 00:23:59.669 "no_auto_visible": false 00:23:59.669 } 00:23:59.669 } 00:23:59.669 }, 00:23:59.669 { 00:23:59.669 "method": "nvmf_subsystem_add_listener", 00:23:59.669 "params": { 00:23:59.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.669 "listen_address": { 00:23:59.669 "trtype": "TCP", 00:23:59.669 "adrfam": 
"IPv4", 00:23:59.669 "traddr": "10.0.0.2", 00:23:59.669 "trsvcid": "4420" 00:23:59.669 }, 00:23:59.669 "secure_channel": false, 00:23:59.669 "sock_impl": "ssl" 00:23:59.669 } 00:23:59.669 } 00:23:59.669 ] 00:23:59.669 } 00:23:59.669 ] 00:23:59.669 }' 00:23:59.669 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.669 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=317799 00:23:59.670 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:59.670 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 317799 00:23:59.670 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317799 ']' 00:23:59.670 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.670 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.670 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.670 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.670 03:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.670 [2024-12-14 03:05:14.779798] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:59.670 [2024-12-14 03:05:14.779848] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.929 [2024-12-14 03:05:14.857019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.929 [2024-12-14 03:05:14.876397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.929 [2024-12-14 03:05:14.876429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.929 [2024-12-14 03:05:14.876436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.929 [2024-12-14 03:05:14.876442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.929 [2024-12-14 03:05:14.876447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:59.929 [2024-12-14 03:05:14.876948] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.186 [2024-12-14 03:05:15.084731] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.186 [2024-12-14 03:05:15.116768] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:00.186 [2024-12-14 03:05:15.116939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.754 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.754 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:00.754 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:00.754 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:00.754 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.754 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.754 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=317834 00:24:00.754 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 317834 /var/tmp/bdevperf.sock 00:24:00.754 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 317834 ']' 00:24:00.754 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.754 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:00.754 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.754 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
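Editorial note: the second bdevperf instance takes the opposite approach to the first. Rather than calling keyring_file_add_key and bdev_nvme_attach_controller over RPC after startup, it is handed the initiator configuration captured from the first run at launch time through -c /dev/fd/63, so the TLS-backed bdev is built directly from the JSON. A minimal sketch of that flow (fd number, socket path and workload flags as used in this log; feeding the JSON through a here-string is an assumption about the plumbing, the test helper may wire fd 63 differently):

# capture the initiator-side config (keyring entry plus the attach with "psk": "key0") from the first bdevperf
bperfcfg=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)

# start a new bdevperf that establishes the TLS connection straight from that config
./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
  -c /dev/fd/63 63<<< "$bperfcfg" &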
00:24:00.754 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:00.754 "subsystems": [ 00:24:00.754 { 00:24:00.754 "subsystem": "keyring", 00:24:00.754 "config": [ 00:24:00.754 { 00:24:00.754 "method": "keyring_file_add_key", 00:24:00.754 "params": { 00:24:00.754 "name": "key0", 00:24:00.754 "path": "/tmp/tmp.FFI3omtsdz" 00:24:00.754 } 00:24:00.754 } 00:24:00.754 ] 00:24:00.754 }, 00:24:00.754 { 00:24:00.754 "subsystem": "iobuf", 00:24:00.754 "config": [ 00:24:00.754 { 00:24:00.754 "method": "iobuf_set_options", 00:24:00.754 "params": { 00:24:00.754 "small_pool_count": 8192, 00:24:00.754 "large_pool_count": 1024, 00:24:00.754 "small_bufsize": 8192, 00:24:00.754 "large_bufsize": 135168, 00:24:00.754 "enable_numa": false 00:24:00.754 } 00:24:00.754 } 00:24:00.754 ] 00:24:00.754 }, 00:24:00.754 { 00:24:00.754 "subsystem": "sock", 00:24:00.754 "config": [ 00:24:00.754 { 00:24:00.754 "method": "sock_set_default_impl", 00:24:00.754 "params": { 00:24:00.754 "impl_name": "posix" 00:24:00.754 } 00:24:00.754 }, 00:24:00.754 { 00:24:00.754 "method": "sock_impl_set_options", 00:24:00.754 "params": { 00:24:00.754 "impl_name": "ssl", 00:24:00.754 "recv_buf_size": 4096, 00:24:00.754 "send_buf_size": 4096, 00:24:00.754 "enable_recv_pipe": true, 00:24:00.754 "enable_quickack": false, 00:24:00.754 "enable_placement_id": 0, 00:24:00.754 "enable_zerocopy_send_server": true, 00:24:00.754 "enable_zerocopy_send_client": false, 00:24:00.754 "zerocopy_threshold": 0, 00:24:00.754 "tls_version": 0, 00:24:00.754 "enable_ktls": false 00:24:00.754 } 00:24:00.754 }, 00:24:00.754 { 00:24:00.754 "method": "sock_impl_set_options", 00:24:00.754 "params": { 00:24:00.754 "impl_name": "posix", 00:24:00.754 "recv_buf_size": 2097152, 00:24:00.754 "send_buf_size": 2097152, 00:24:00.754 "enable_recv_pipe": true, 00:24:00.754 "enable_quickack": false, 00:24:00.754 "enable_placement_id": 0, 00:24:00.754 "enable_zerocopy_send_server": true, 00:24:00.754 "enable_zerocopy_send_client": false, 00:24:00.754 "zerocopy_threshold": 0, 00:24:00.754 "tls_version": 0, 00:24:00.754 "enable_ktls": false 00:24:00.754 } 00:24:00.754 } 00:24:00.754 ] 00:24:00.754 }, 00:24:00.754 { 00:24:00.754 "subsystem": "vmd", 00:24:00.754 "config": [] 00:24:00.754 }, 00:24:00.754 { 00:24:00.754 "subsystem": "accel", 00:24:00.754 "config": [ 00:24:00.754 { 00:24:00.754 "method": "accel_set_options", 00:24:00.754 "params": { 00:24:00.754 "small_cache_size": 128, 00:24:00.754 "large_cache_size": 16, 00:24:00.754 "task_count": 2048, 00:24:00.754 "sequence_count": 2048, 00:24:00.754 "buf_count": 2048 00:24:00.754 } 00:24:00.754 } 00:24:00.754 ] 00:24:00.754 }, 00:24:00.754 { 00:24:00.754 "subsystem": "bdev", 00:24:00.754 "config": [ 00:24:00.754 { 00:24:00.754 "method": "bdev_set_options", 00:24:00.754 "params": { 00:24:00.754 "bdev_io_pool_size": 65535, 00:24:00.754 "bdev_io_cache_size": 256, 00:24:00.754 "bdev_auto_examine": true, 00:24:00.754 "iobuf_small_cache_size": 128, 00:24:00.754 "iobuf_large_cache_size": 16 00:24:00.754 } 00:24:00.754 }, 00:24:00.754 { 00:24:00.754 "method": "bdev_raid_set_options", 00:24:00.754 "params": { 00:24:00.754 "process_window_size_kb": 1024, 00:24:00.754 "process_max_bandwidth_mb_sec": 0 00:24:00.754 } 00:24:00.754 }, 00:24:00.754 { 00:24:00.754 "method": "bdev_iscsi_set_options", 00:24:00.754 "params": { 00:24:00.754 "timeout_sec": 30 00:24:00.754 } 00:24:00.754 }, 00:24:00.754 { 00:24:00.754 "method": "bdev_nvme_set_options", 00:24:00.754 "params": { 00:24:00.754 "action_on_timeout": "none", 
00:24:00.754 "timeout_us": 0, 00:24:00.754 "timeout_admin_us": 0, 00:24:00.754 "keep_alive_timeout_ms": 10000, 00:24:00.754 "arbitration_burst": 0, 00:24:00.754 "low_priority_weight": 0, 00:24:00.754 "medium_priority_weight": 0, 00:24:00.754 "high_priority_weight": 0, 00:24:00.754 "nvme_adminq_poll_period_us": 10000, 00:24:00.754 "nvme_ioq_poll_period_us": 0, 00:24:00.754 "io_queue_requests": 512, 00:24:00.754 "delay_cmd_submit": true, 00:24:00.754 "transport_retry_count": 4, 00:24:00.754 "bdev_retry_count": 3, 00:24:00.754 "transport_ack_timeout": 0, 00:24:00.754 "ctrlr_loss_timeout_sec": 0, 00:24:00.754 "reconnect_delay_sec": 0, 00:24:00.754 "fast_io_fail_timeout_sec": 0, 00:24:00.754 "disable_auto_failback": false, 00:24:00.754 "generate_uuids": false, 00:24:00.754 "transport_tos": 0, 00:24:00.754 "nvme_error_stat": false, 00:24:00.754 "rdma_srq_size": 0, 00:24:00.754 "io_path_stat": false, 00:24:00.754 "allow_accel_sequence": false, 00:24:00.754 "rdma_max_cq_size": 0, 00:24:00.754 "rdma_cm_event_timeout_ms": 0, 00:24:00.754 "dhchap_digests": [ 00:24:00.754 "sha256", 00:24:00.754 "sha384", 00:24:00.754 "sha512" 00:24:00.754 ], 00:24:00.754 "dhchap_dhgroups": [ 00:24:00.754 "null", 00:24:00.754 "ffdhe2048", 00:24:00.754 "ffdhe3072", 00:24:00.755 "ffdhe4096", 00:24:00.755 "ffdhe6144", 00:24:00.755 "ffdhe8192" 00:24:00.755 ], 00:24:00.755 "rdma_umr_per_io": false 00:24:00.755 } 00:24:00.755 }, 00:24:00.755 { 00:24:00.755 "method": "bdev_nvme_attach_controller", 00:24:00.755 "params": { 00:24:00.755 "name": "nvme0", 00:24:00.755 "trtype": "TCP", 00:24:00.755 "adrfam": "IPv4", 00:24:00.755 "traddr": "10.0.0.2", 00:24:00.755 "trsvcid": "4420", 00:24:00.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.755 "prchk_reftag": false, 00:24:00.755 "prchk_guard": false, 00:24:00.755 "ctrlr_loss_timeout_sec": 0, 00:24:00.755 "reconnect_delay_sec": 0, 00:24:00.755 "fast_io_fail_timeout_sec": 0, 00:24:00.755 "psk": "key0", 00:24:00.755 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:00.755 "hdgst": false, 00:24:00.755 "ddgst": false, 00:24:00.755 "multipath": "multipath" 00:24:00.755 } 00:24:00.755 }, 00:24:00.755 { 00:24:00.755 "method": "bdev_nvme_set_hotplug", 00:24:00.755 "params": { 00:24:00.755 "period_us": 100000, 00:24:00.755 "enable": false 00:24:00.755 } 00:24:00.755 }, 00:24:00.755 { 00:24:00.755 "method": "bdev_enable_histogram", 00:24:00.755 "params": { 00:24:00.755 "name": "nvme0n1", 00:24:00.755 "enable": true 00:24:00.755 } 00:24:00.755 }, 00:24:00.755 { 00:24:00.755 "method": "bdev_wait_for_examine" 00:24:00.755 } 00:24:00.755 ] 00:24:00.755 }, 00:24:00.755 { 00:24:00.755 "subsystem": "nbd", 00:24:00.755 "config": [] 00:24:00.755 } 00:24:00.755 ] 00:24:00.755 }' 00:24:00.755 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.755 03:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.755 [2024-12-14 03:05:15.686918] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:24:00.755 [2024-12-14 03:05:15.686967] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317834 ] 00:24:00.755 [2024-12-14 03:05:15.759909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.755 [2024-12-14 03:05:15.782442] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.014 [2024-12-14 03:05:15.929991] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.580 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.580 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:01.580 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:01.580 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:01.839 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.839 03:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:01.839 Running I/O for 1 seconds... 00:24:02.775 5776.00 IOPS, 22.56 MiB/s 00:24:02.775 Latency(us) 00:24:02.775 [2024-12-14T02:05:17.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.775 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:02.775 Verification LBA range: start 0x0 length 0x2000 00:24:02.775 nvme0n1 : 1.02 5785.95 22.60 0.00 0.00 21918.81 5773.41 26339.23 00:24:02.775 [2024-12-14T02:05:17.908Z] =================================================================================================================== 00:24:02.775 [2024-12-14T02:05:17.908Z] Total : 5785.95 22.60 0.00 0.00 21918.81 5773.41 26339.23 00:24:02.775 { 00:24:02.775 "results": [ 00:24:02.775 { 00:24:02.775 "job": "nvme0n1", 00:24:02.775 "core_mask": "0x2", 00:24:02.775 "workload": "verify", 00:24:02.775 "status": "finished", 00:24:02.775 "verify_range": { 00:24:02.775 "start": 0, 00:24:02.775 "length": 8192 00:24:02.775 }, 00:24:02.775 "queue_depth": 128, 00:24:02.775 "io_size": 4096, 00:24:02.775 "runtime": 1.020403, 00:24:02.775 "iops": 5785.949276903341, 00:24:02.775 "mibps": 22.601364362903677, 00:24:02.775 "io_failed": 0, 00:24:02.775 "io_timeout": 0, 00:24:02.775 "avg_latency_us": 21918.809198606272, 00:24:02.775 "min_latency_us": 5773.409523809524, 00:24:02.775 "max_latency_us": 26339.230476190478 00:24:02.775 } 00:24:02.775 ], 00:24:02.775 "core_count": 1 00:24:02.775 } 00:24:02.775 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:02.775 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:02.775 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:02.775 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:02.775 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:02.775 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:24:02.775 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:02.775 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:02.775 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:02.775 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:02.775 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:02.775 nvmf_trace.0 00:24:03.035 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:03.035 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 317834 00:24:03.035 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317834 ']' 00:24:03.035 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317834 00:24:03.035 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.035 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.035 03:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317834 00:24:03.035 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:03.035 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:03.035 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317834' 00:24:03.035 killing process with pid 317834 00:24:03.035 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317834 00:24:03.035 Received shutdown signal, test time was about 1.000000 seconds 00:24:03.035 00:24:03.035 Latency(us) 00:24:03.035 [2024-12-14T02:05:18.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.035 [2024-12-14T02:05:18.168Z] =================================================================================================================== 00:24:03.035 [2024-12-14T02:05:18.168Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.035 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317834 00:24:03.035 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:03.035 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.035 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:03.035 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.035 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:03.035 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.035 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.035 rmmod nvme_tcp 00:24:03.294 rmmod nvme_fabrics 00:24:03.294 rmmod nvme_keyring 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.294 03:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 317799 ']' 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 317799 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 317799 ']' 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 317799 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317799 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317799' 00:24:03.294 killing process with pid 317799 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 317799 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 317799 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.294 03:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.830 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:05.830 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.72Y4OdGFd7 /tmp/tmp.83BVMzZ21Y /tmp/tmp.FFI3omtsdz 00:24:05.830 00:24:05.830 real 1m18.544s 00:24:05.830 user 2m1.492s 00:24:05.830 sys 0m29.134s 00:24:05.830 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.830 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.830 ************************************ 00:24:05.830 END TEST nvmf_tls 00:24:05.830 
************************************ 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:05.831 ************************************ 00:24:05.831 START TEST nvmf_fips 00:24:05.831 ************************************ 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:05.831 * Looking for test storage... 00:24:05.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:05.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.831 --rc genhtml_branch_coverage=1 00:24:05.831 --rc genhtml_function_coverage=1 00:24:05.831 --rc genhtml_legend=1 00:24:05.831 --rc geninfo_all_blocks=1 00:24:05.831 --rc geninfo_unexecuted_blocks=1 00:24:05.831 00:24:05.831 ' 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:05.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.831 --rc genhtml_branch_coverage=1 00:24:05.831 --rc genhtml_function_coverage=1 00:24:05.831 --rc genhtml_legend=1 00:24:05.831 --rc geninfo_all_blocks=1 00:24:05.831 --rc geninfo_unexecuted_blocks=1 00:24:05.831 00:24:05.831 ' 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:05.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.831 --rc genhtml_branch_coverage=1 00:24:05.831 --rc genhtml_function_coverage=1 00:24:05.831 --rc genhtml_legend=1 00:24:05.831 --rc geninfo_all_blocks=1 00:24:05.831 --rc geninfo_unexecuted_blocks=1 00:24:05.831 00:24:05.831 ' 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:05.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.831 --rc genhtml_branch_coverage=1 00:24:05.831 --rc genhtml_function_coverage=1 00:24:05.831 --rc genhtml_legend=1 00:24:05.831 --rc geninfo_all_blocks=1 00:24:05.831 --rc geninfo_unexecuted_blocks=1 00:24:05.831 00:24:05.831 ' 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:05.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:05.831 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:05.832 03:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:05.832 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:06.091 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.091 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:06.091 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.091 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:06.091 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:06.091 03:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:06.091 Error setting digest 00:24:06.091 40A23B6B997F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:06.091 40A23B6B997F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.091 
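The trace above is the fips.sh preflight: compare the installed OpenSSL version against the 3.0.0 target field by field, locate fips.so under the openssl modules directory, note the Red Hat fipsinstall warning, point OPENSSL_CONF at a generated spdk_fips.conf, confirm that both a base and a fips provider are listed, and then prove enforcement by expecting openssl md5 to fail. Below is a minimal standalone sketch of the same checks; the sort -V comparison, the messages, and the script structure are simplifications rather than the fips.sh implementation, and it assumes OPENSSL_CONF already points at a FIPS-enabled configuration as it does in this run.

#!/usr/bin/env bash
set -euo pipefail

target=3.0.0
ver=$(openssl version | awk '{print $2}')
# "ge" check: after a version sort, the smaller of the two must be the target.
if [[ $(printf '%s\n' "$target" "$ver" | sort -V | head -n1) != "$target" ]]; then
    echo "OpenSSL $ver is older than $target; provider-based FIPS is unavailable" >&2
    exit 1
fi

# The FIPS provider module must be installed under the modules directory.
moddir=$(openssl info -modulesdir)
[[ -f $moddir/fips.so ]] || { echo "no fips.so under $moddir" >&2; exit 1; }

# The provider listing should include a fips provider.
openssl list -providers | grep -qi fips || { echo "fips provider not loaded" >&2; exit 1; }

# Negative test: MD5 is not FIPS-approved, so the digest must be refused.
if echo test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 unexpectedly succeeded -- FIPS mode not enforced" >&2
    exit 1
fi
echo "FIPS preflight passed"

On the traced system the MD5 step fails exactly as shown above (the "Error setting digest" lines), which is the pass condition for this part of the test.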
03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.091 03:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.659 03:05:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:12.659 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:12.659 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.659 03:05:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:12.659 Found net devices under 0000:af:00.0: cvl_0_0 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:12.659 Found net devices under 0000:af:00.1: cvl_0_1 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:12.659 03:05:26 
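The device discovery traced above boils down to a sysfs walk: for each supported PCI function, list the kernel interfaces under /sys/bus/pci/devices/<bdf>/net/ and report them, which is where the "Found net devices under 0000:af:00.0: cvl_0_0" and "0000:af:00.1: cvl_0_1" lines come from. A small illustrative script in the same spirit follows, taking the PCI addresses as arguments; the script shape, the operstate read, and the output wording are assumptions for illustration, not the nvmf/common.sh helper itself.

#!/usr/bin/env bash
set -euo pipefail
shopt -s nullglob

for pci in "$@"; do    # e.g. 0000:af:00.0 0000:af:00.1
    # Each bound network function exposes its interface name(s) under .../net/.
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        iface=${path##*/}
        state=$(cat "$path/operstate" 2>/dev/null || echo unknown)
        echo "Found net device under $pci: $iface ($state)"
    done
done

Invoked with the two E810 functions found above, this would print the same pair of interface names (cvl_0_0 and cvl_0_1) that the test goes on to use.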
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.659 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:12.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:24:12.660 00:24:12.660 --- 10.0.0.2 ping statistics --- 00:24:12.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.660 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:12.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:24:12.660 00:24:12.660 --- 10.0.0.1 ping statistics --- 00:24:12.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.660 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=320134 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 320134 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 320134 ']' 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.660 03:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.660 [2024-12-14 03:05:26.932906] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
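The block above builds the test topology: one of the two back-to-back-cabled ports (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the other (cvl_0_1) stays in the default namespace with 10.0.0.1/24, an iptables rule admits TCP port 4420, and a ping in each direction proves the link before nvmf_tgt is launched inside the namespace. A condensed sketch of the same steps, with the interface and namespace names as parameters; it assumes the two ports are physically looped back to each other, as on this rig, and must run as root.

#!/usr/bin/env bash
set -euo pipefail

TGT_IF=${1:?target-side interface}     # moved into the namespace, gets 10.0.0.2
INI_IF=${2:?initiator-side interface}  # stays in the default namespace, gets 10.0.0.1
NS=${3:-spdk_tgt_ns}

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic on the default port through the host firewall.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check reachability in both directions before starting the target in the namespace.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The nvmftestfini teardown later in the run undoes all of this (drops the iptables rule, flushes the addresses, and removes the namespace).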
00:24:12.660 [2024-12-14 03:05:26.932954] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.660 [2024-12-14 03:05:27.009671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.660 [2024-12-14 03:05:27.029654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.660 [2024-12-14 03:05:27.029688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.660 [2024-12-14 03:05:27.029696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.660 [2024-12-14 03:05:27.029703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.660 [2024-12-14 03:05:27.029707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.660 [2024-12-14 03:05:27.030166] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.aBl 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.aBl 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.aBl 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.aBl 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:12.660 [2024-12-14 03:05:27.340844] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.660 [2024-12-14 03:05:27.356839] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:12.660 [2024-12-14 03:05:27.357019] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.660 malloc0 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.660 03:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=320168 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 320168 /var/tmp/bdevperf.sock 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 320168 ']' 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.660 [2024-12-14 03:05:27.488662] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:24:12.660 [2024-12-14 03:05:27.488713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320168 ] 00:24:12.660 [2024-12-14 03:05:27.561366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.660 [2024-12-14 03:05:27.583323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:12.660 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.aBl 00:24:12.919 03:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:12.919 [2024-12-14 03:05:28.026193] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.177 TLSTESTn1 00:24:13.177 03:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:13.177 Running I/O for 10 seconds... 
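The TLS data-path test above is driven entirely over bdevperf's RPC socket: the interchange PSK is written to a 0600 temp file, registered with keyring_file_add_key as key0, the controller is attached over NVMe/TCP with --psk key0, and bdevperf.py perform_tests runs the 10-second verify workload. A sketch of that initiator-side sequence follows, assuming bdevperf is already running with -z -r /var/tmp/bdevperf.sock and that the target subsystem nqn.2016-06.io.spdk:cnode1 is already listening with TLS on 10.0.0.2:4420 (the setup_nvmf_tgt_conf step earlier in the trace).

#!/usr/bin/env bash
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# Interchange PSK from this run, written to a root-only file.
KEY_PATH=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY_PATH"
chmod 0600 "$KEY_PATH"

# bdevperf is assumed to be running already, started as:
#   $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10

# Register the key in bdevperf's keyring, then attach with TLS over NVMe/TCP.
$RPC keyring_file_add_key key0 "$KEY_PATH"
$RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Kick off the queued verify workload (10 seconds, per the -t 10 above).
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The PSK file only lives for the duration of the run; the cleanup at the end of the test removes /tmp/spdk-psk.aBl along with the target and bdevperf processes.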
00:24:15.489 5444.00 IOPS, 21.27 MiB/s [2024-12-14T02:05:31.559Z] 5198.50 IOPS, 20.31 MiB/s [2024-12-14T02:05:32.495Z] 5267.33 IOPS, 20.58 MiB/s [2024-12-14T02:05:33.430Z] 5259.75 IOPS, 20.55 MiB/s [2024-12-14T02:05:34.366Z] 5259.80 IOPS, 20.55 MiB/s [2024-12-14T02:05:35.303Z] 5258.00 IOPS, 20.54 MiB/s [2024-12-14T02:05:36.239Z] 5253.29 IOPS, 20.52 MiB/s [2024-12-14T02:05:37.615Z] 5299.50 IOPS, 20.70 MiB/s [2024-12-14T02:05:38.551Z] 5285.56 IOPS, 20.65 MiB/s [2024-12-14T02:05:38.551Z] 5301.40 IOPS, 20.71 MiB/s 00:24:23.418 Latency(us) 00:24:23.418 [2024-12-14T02:05:38.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.418 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:23.418 Verification LBA range: start 0x0 length 0x2000 00:24:23.418 TLSTESTn1 : 10.02 5305.27 20.72 0.00 0.00 24090.76 6054.28 31706.94 00:24:23.418 [2024-12-14T02:05:38.551Z] =================================================================================================================== 00:24:23.418 [2024-12-14T02:05:38.551Z] Total : 5305.27 20.72 0.00 0.00 24090.76 6054.28 31706.94 00:24:23.418 { 00:24:23.418 "results": [ 00:24:23.418 { 00:24:23.418 "job": "TLSTESTn1", 00:24:23.418 "core_mask": "0x4", 00:24:23.418 "workload": "verify", 00:24:23.418 "status": "finished", 00:24:23.418 "verify_range": { 00:24:23.418 "start": 0, 00:24:23.418 "length": 8192 00:24:23.418 }, 00:24:23.418 "queue_depth": 128, 00:24:23.418 "io_size": 4096, 00:24:23.418 "runtime": 10.01683, 00:24:23.418 "iops": 5305.271228522397, 00:24:23.418 "mibps": 20.723715736415613, 00:24:23.418 "io_failed": 0, 00:24:23.418 "io_timeout": 0, 00:24:23.418 "avg_latency_us": 24090.755055762547, 00:24:23.418 "min_latency_us": 6054.278095238095, 00:24:23.418 "max_latency_us": 31706.94095238095 00:24:23.418 } 00:24:23.418 ], 00:24:23.418 "core_count": 1 00:24:23.418 } 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:23.418 nvmf_trace.0 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 320168 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 320168 ']' 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 320168 00:24:23.418 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:23.419 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.419 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 320168 00:24:23.419 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:23.419 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:23.419 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 320168' 00:24:23.419 killing process with pid 320168 00:24:23.419 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 320168 00:24:23.419 Received shutdown signal, test time was about 10.000000 seconds 00:24:23.419 00:24:23.419 Latency(us) 00:24:23.419 [2024-12-14T02:05:38.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.419 [2024-12-14T02:05:38.552Z] =================================================================================================================== 00:24:23.419 [2024-12-14T02:05:38.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:23.419 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 320168 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:23.678 rmmod nvme_tcp 00:24:23.678 rmmod nvme_fabrics 00:24:23.678 rmmod nvme_keyring 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 320134 ']' 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 320134 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 320134 ']' 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 320134 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 320134 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:23.678 03:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 320134' 00:24:23.678 killing process with pid 320134 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 320134 00:24:23.678 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 320134 00:24:23.937 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:23.937 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:23.937 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:23.937 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:23.937 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:23.937 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:23.937 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:23.937 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:23.937 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:23.937 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.937 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.937 03:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.842 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:25.842 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.aBl 00:24:25.842 00:24:25.842 real 0m20.344s 00:24:25.842 user 0m21.463s 00:24:25.842 sys 0m9.249s 00:24:25.842 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.842 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:25.842 ************************************ 00:24:25.842 END TEST nvmf_fips 00:24:25.842 ************************************ 00:24:25.842 03:05:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:25.842 03:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:25.842 03:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:25.842 03:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:26.102 ************************************ 00:24:26.102 START TEST nvmf_control_msg_list 00:24:26.102 ************************************ 00:24:26.102 03:05:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:26.102 * Looking for test storage... 
00:24:26.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:26.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.102 --rc genhtml_branch_coverage=1 00:24:26.102 --rc genhtml_function_coverage=1 00:24:26.102 --rc genhtml_legend=1 00:24:26.102 --rc geninfo_all_blocks=1 00:24:26.102 --rc geninfo_unexecuted_blocks=1 00:24:26.102 00:24:26.102 ' 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:26.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.102 --rc genhtml_branch_coverage=1 00:24:26.102 --rc genhtml_function_coverage=1 00:24:26.102 --rc genhtml_legend=1 00:24:26.102 --rc geninfo_all_blocks=1 00:24:26.102 --rc geninfo_unexecuted_blocks=1 00:24:26.102 00:24:26.102 ' 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:26.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.102 --rc genhtml_branch_coverage=1 00:24:26.102 --rc genhtml_function_coverage=1 00:24:26.102 --rc genhtml_legend=1 00:24:26.102 --rc geninfo_all_blocks=1 00:24:26.102 --rc geninfo_unexecuted_blocks=1 00:24:26.102 00:24:26.102 ' 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:26.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.102 --rc genhtml_branch_coverage=1 00:24:26.102 --rc genhtml_function_coverage=1 00:24:26.102 --rc genhtml_legend=1 00:24:26.102 --rc geninfo_all_blocks=1 00:24:26.102 --rc geninfo_unexecuted_blocks=1 00:24:26.102 00:24:26.102 ' 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.102 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:26.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.103 03:05:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:32.672 03:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:32.672 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:32.673 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.673 03:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:32.673 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:32.673 Found net devices under 0000:af:00.0: cvl_0_0 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:32.673 Found net devices under 0000:af:00.1: cvl_0_1 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.673 03:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:32.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:24:32.673 00:24:32.673 --- 10.0.0.2 ping statistics --- 00:24:32.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.673 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:24:32.673 00:24:32.673 --- 10.0.0.1 ping statistics --- 00:24:32.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.673 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=322547 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 322547 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 322547 ']' 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.673 03:05:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.673 [2024-12-14 03:05:47.034985] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:24:32.673 [2024-12-14 03:05:47.035032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.673 [2024-12-14 03:05:47.112742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.673 [2024-12-14 03:05:47.134605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.673 [2024-12-14 03:05:47.134646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.673 [2024-12-14 03:05:47.134653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.674 [2024-12-14 03:05:47.134659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.674 [2024-12-14 03:05:47.134664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
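The trace above shows nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten blocking on the /var/tmp/spdk.sock RPC socket before any RPCs are issued. A minimal standalone sketch of that start-and-wait pattern, with the binary path, namespace name, and flags taken from the log above (the polling loop is illustrative and is not the autotest's waitforlisten implementation):

# Sketch: launch the SPDK nvmf target inside the target namespace, then wait for its RPC socket.
NS=cvl_0_0_ns_spdk
APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF &
nvmfpid=$!
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break   # UNIX-domain RPC socket appears once the app has initialized
    sleep 0.1
done
echo "nvmf_tgt running as pid $nvmfpid"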
00:24:32.674 [2024-12-14 03:05:47.135137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.674 [2024-12-14 03:05:47.266429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.674 Malloc0 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.674 03:05:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.674 [2024-12-14 03:05:47.306511] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=322574 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=322575 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=322576 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 322574 00:24:32.674 03:05:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.674 [2024-12-14 03:05:47.395318] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:32.674 [2024-12-14 03:05:47.395509] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:32.674 [2024-12-14 03:05:47.395665] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:33.610 Initializing NVMe Controllers 00:24:33.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:33.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:33.610 Initialization complete. Launching workers. 
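The RPC calls above build the whole target in a few steps: a TCP transport limited to a single control message buffer and 768-byte in-capsule data, subsystem nqn.2024-07.io.spdk:cnode0 with any host allowed, a malloc bdev attached as its namespace, and a listener on 10.0.0.2:4420; three spdk_nvme_perf initiators pinned to cores 1-3 then connect through it. A hedged sketch of the same sequence driven with SPDK's scripts/rpc.py against the default socket (values copied from the log; the autotest itself goes through its rpc_cmd wrapper rather than calling rpc.py like this):

RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
$RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
$RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a        # -a: allow any host
$RPC bdev_malloc_create -b Malloc0 32 512                       # 32 MB malloc bdev, 512-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# One of the three initiators seen above: queue depth 1, 4 KiB random reads, 1 second, core mask 0x2
build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'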
00:24:33.610 ======================================================== 00:24:33.610 Latency(us) 00:24:33.610 Device Information : IOPS MiB/s Average min max 00:24:33.610 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 5155.00 20.14 193.64 121.53 480.34 00:24:33.610 ======================================================== 00:24:33.610 Total : 5155.00 20.14 193.64 121.53 480.34 00:24:33.610 00:24:33.610 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 322575 00:24:33.610 Initializing NVMe Controllers 00:24:33.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:33.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:33.610 Initialization complete. Launching workers. 00:24:33.610 ======================================================== 00:24:33.610 Latency(us) 00:24:33.610 Device Information : IOPS MiB/s Average min max 00:24:33.610 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5299.00 20.70 188.35 129.60 379.84 00:24:33.610 ======================================================== 00:24:33.610 Total : 5299.00 20.70 188.35 129.60 379.84 00:24:33.610 00:24:33.610 Initializing NVMe Controllers 00:24:33.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:33.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:33.610 Initialization complete. Launching workers. 00:24:33.610 ======================================================== 00:24:33.611 Latency(us) 00:24:33.611 Device Information : IOPS MiB/s Average min max 00:24:33.611 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5295.00 20.68 188.50 129.17 377.69 00:24:33.611 ======================================================== 00:24:33.611 Total : 5295.00 20.68 188.50 129.17 377.69 00:24:33.611 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 322576 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:33.611 rmmod nvme_tcp 00:24:33.611 rmmod nvme_fabrics 00:24:33.611 rmmod nvme_keyring 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 
322547 ']' 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 322547 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 322547 ']' 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 322547 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 322547 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 322547' 00:24:33.611 killing process with pid 322547 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 322547 00:24:33.611 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 322547 00:24:33.870 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:33.870 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:33.870 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:33.870 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:33.870 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:33.870 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:33.870 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:33.870 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:33.870 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:33.870 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.870 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.870 03:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.405 03:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:36.405 00:24:36.405 real 0m9.964s 00:24:36.405 user 0m6.537s 00:24:36.405 sys 0m5.397s 00:24:36.405 03:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.405 03:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:36.405 ************************************ 00:24:36.405 END TEST nvmf_control_msg_list 00:24:36.405 ************************************ 00:24:36.405 
03:05:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:36.405 03:05:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:36.405 03:05:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:36.405 03:05:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:36.405 ************************************ 00:24:36.405 START TEST nvmf_wait_for_buf 00:24:36.405 ************************************ 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:36.405 * Looking for test storage... 00:24:36.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.405 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:36.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.406 --rc genhtml_branch_coverage=1 00:24:36.406 --rc genhtml_function_coverage=1 00:24:36.406 --rc genhtml_legend=1 00:24:36.406 --rc geninfo_all_blocks=1 00:24:36.406 --rc geninfo_unexecuted_blocks=1 00:24:36.406 00:24:36.406 ' 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:36.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.406 --rc genhtml_branch_coverage=1 00:24:36.406 --rc genhtml_function_coverage=1 00:24:36.406 --rc genhtml_legend=1 00:24:36.406 --rc geninfo_all_blocks=1 00:24:36.406 --rc geninfo_unexecuted_blocks=1 00:24:36.406 00:24:36.406 ' 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:36.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.406 --rc genhtml_branch_coverage=1 00:24:36.406 --rc genhtml_function_coverage=1 00:24:36.406 --rc genhtml_legend=1 00:24:36.406 --rc geninfo_all_blocks=1 00:24:36.406 --rc geninfo_unexecuted_blocks=1 00:24:36.406 00:24:36.406 ' 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:36.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.406 --rc genhtml_branch_coverage=1 00:24:36.406 --rc genhtml_function_coverage=1 00:24:36.406 --rc genhtml_legend=1 00:24:36.406 --rc geninfo_all_blocks=1 00:24:36.406 --rc geninfo_unexecuted_blocks=1 00:24:36.406 00:24:36.406 ' 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.406 03:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:36.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:36.406 03:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.680 
03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:41.680 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.680 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:41.681 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:41.681 Found net devices under 0000:af:00.0: cvl_0_0 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:41.681 Found net devices under 0000:af:00.1: cvl_0_1 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.681 03:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:41.681 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:41.940 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:41.940 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:41.940 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:41.940 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.940 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:41.940 03:05:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.940 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:41.940 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:41.940 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:41.940 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:41.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:41.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:24:41.940 00:24:41.940 --- 10.0.0.2 ping statistics --- 00:24:41.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.940 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:24:41.940 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:41.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:41.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:24:41.940 00:24:41.940 --- 10.0.0.1 ping statistics --- 00:24:41.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.940 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:24:41.940 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.940 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:41.940 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:41.940 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:41.940 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:41.940 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:41.940 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:41.940 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:41.940 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:42.199 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:42.199 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:42.199 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:42.199 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.199 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=324825 00:24:42.199 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 324825 00:24:42.199 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:42.199 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 324825 ']' 00:24:42.199 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.199 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.199 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.199 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.199 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.199 [2024-12-14 03:05:57.150539] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
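[Editor's sketch] The nvmf_tcp_init sequence traced above boils down to roughly the following shell steps. This is a condensed sketch, not the script itself: the interface names (cvl_0_0, cvl_0_1), the 10.0.0.x addresses, and the namespace name are simply the values this particular run discovered, and the script's ipts wrapper is shown here in its expanded iptables form.

  # Move the target port into a private namespace; the initiator side stays on the host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP (host side)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (namespace side)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP listen port, tagged so cleanup can find the rule again.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host

Once both pings succeed, the common helpers prepend "ip netns exec cvl_0_0_ns_spdk" to the nvmf_tgt command line (NVMF_TARGET_NS_CMD), so the target listens on 10.0.0.2:4420 inside the namespace while the initiator connects from the host side.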
00:24:42.199 [2024-12-14 03:05:57.150589] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.199 [2024-12-14 03:05:57.228845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.199 [2024-12-14 03:05:57.250882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.199 [2024-12-14 03:05:57.250918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.199 [2024-12-14 03:05:57.250926] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.199 [2024-12-14 03:05:57.250931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.199 [2024-12-14 03:05:57.250937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.200 [2024-12-14 03:05:57.251423] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.200 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.200 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:42.200 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:42.200 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:42.200 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.459 03:05:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.459 Malloc0 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.459 [2024-12-14 03:05:57.452752] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.459 [2024-12-14 03:05:57.480939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.459 03:05:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:42.459 [2024-12-14 03:05:57.564404] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:44.364 Initializing NVMe Controllers 00:24:44.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:44.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:44.364 Initialization complete. Launching workers. 00:24:44.364 ======================================================== 00:24:44.364 Latency(us) 00:24:44.364 Device Information : IOPS MiB/s Average min max 00:24:44.364 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 99.00 12.38 42249.35 31899.31 111731.57 00:24:44.364 ======================================================== 00:24:44.364 Total : 99.00 12.38 42249.35 31899.31 111731.57 00:24:44.364 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1558 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1558 -eq 0 ]] 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:44.364 rmmod nvme_tcp 00:24:44.364 rmmod nvme_fabrics 00:24:44.364 rmmod nvme_keyring 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 324825 ']' 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 324825 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 324825 ']' 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 324825 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 324825 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 324825' 00:24:44.364 killing process with pid 324825 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 324825 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 324825 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.364 03:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:46.900 00:24:46.900 real 0m10.458s 00:24:46.900 user 0m4.103s 00:24:46.900 sys 0m4.799s 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:46.900 ************************************ 00:24:46.900 END TEST nvmf_wait_for_buf 00:24:46.900 ************************************ 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:24:46.900 ************************************ 00:24:46.900 START TEST nvmf_fuzz 00:24:46.900 ************************************ 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:46.900 * Looking for test storage... 00:24:46.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:46.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.900 --rc genhtml_branch_coverage=1 00:24:46.900 --rc genhtml_function_coverage=1 00:24:46.900 --rc genhtml_legend=1 00:24:46.900 --rc geninfo_all_blocks=1 00:24:46.900 --rc geninfo_unexecuted_blocks=1 00:24:46.900 00:24:46.900 ' 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:46.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.900 --rc genhtml_branch_coverage=1 00:24:46.900 --rc genhtml_function_coverage=1 00:24:46.900 --rc genhtml_legend=1 00:24:46.900 --rc geninfo_all_blocks=1 00:24:46.900 --rc geninfo_unexecuted_blocks=1 00:24:46.900 00:24:46.900 ' 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:46.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.900 --rc genhtml_branch_coverage=1 00:24:46.900 --rc genhtml_function_coverage=1 00:24:46.900 --rc genhtml_legend=1 00:24:46.900 --rc geninfo_all_blocks=1 00:24:46.900 --rc geninfo_unexecuted_blocks=1 00:24:46.900 00:24:46.900 ' 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:46.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.900 --rc genhtml_branch_coverage=1 00:24:46.900 --rc genhtml_function_coverage=1 00:24:46.900 --rc genhtml_legend=1 00:24:46.900 --rc geninfo_all_blocks=1 00:24:46.900 --rc geninfo_unexecuted_blocks=1 00:24:46.900 00:24:46.900 ' 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.900 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:46.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:46.901 03:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:53.485 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:53.485 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:53.485 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:53.486 Found net devices under 0000:af:00.0: cvl_0_0 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:53.486 Found net devices under 0000:af:00.1: cvl_0_1 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:53.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:24:53.486 00:24:53.486 --- 10.0.0.2 ping statistics --- 00:24:53.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.486 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:53.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:24:53.486 00:24:53.486 --- 10.0.0.1 ping statistics --- 00:24:53.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.486 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:53.486 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=327588 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 327588 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 327588 ']' 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
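[Editor's sketch] The fabrics_fuzz target is brought up the same way as the wait_for_buf target earlier: nvmf_tgt is launched inside the test namespace (here pinned to a single core with -m 0x1) and the test blocks until the target's RPC socket answers before issuing any configuration RPCs. A minimal sketch of that bring-up, assuming the default /var/tmp/spdk.sock socket, repo-relative paths, and scripts/rpc.py as a simplified stand-in for the waitforlisten helper:

  # Launch the target inside the test namespace and remember its pid.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # Stand-in for waitforlisten: poll until the RPC socket responds.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"

Only after this point do the nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem and listener RPCs that follow in the trace make sense, since they all travel over that same socket.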
00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.487 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.488 Malloc0 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:53.488 03:06:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:25.567 Fuzzing completed. 
Shutting down the fuzz application 00:25:25.567 00:25:25.567 Dumping successful admin opcodes: 00:25:25.567 9, 10, 00:25:25.567 Dumping successful io opcodes: 00:25:25.567 0, 9, 00:25:25.567 NS: 0x2000008eff00 I/O qp, Total commands completed: 902707, total successful commands: 5261, random_seed: 3783713024 00:25:25.567 NS: 0x2000008eff00 admin qp, Total commands completed: 93536, total successful commands: 22, random_seed: 3669418880 00:25:25.567 03:06:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:25.567 Fuzzing completed. Shutting down the fuzz application 00:25:25.567 00:25:25.567 Dumping successful admin opcodes: 00:25:25.567 00:25:25.567 Dumping successful io opcodes: 00:25:25.567 00:25:25.567 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3861059974 00:25:25.567 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 3861120886 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:25.567 rmmod nvme_tcp 00:25:25.567 rmmod nvme_fabrics 00:25:25.567 rmmod nvme_keyring 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 327588 ']' 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 327588 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 327588 ']' 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 327588 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 327588 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 327588' 00:25:25.567 killing process with pid 327588 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 327588 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 327588 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:25.567 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:25.568 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:25.568 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:25.568 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:25.568 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:25.568 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:25.568 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:25.568 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:25.568 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.568 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.568 03:06:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.944 03:06:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:26.944 03:06:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:26.944 00:25:26.944 real 0m40.305s 00:25:26.944 user 0m52.304s 00:25:26.944 sys 0m17.052s 00:25:26.944 03:06:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.944 03:06:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.944 ************************************ 00:25:26.944 END TEST nvmf_fuzz 00:25:26.944 ************************************ 00:25:26.944 03:06:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:26.944 03:06:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:26.944 03:06:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:26.944 03:06:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:26.944 ************************************ 00:25:26.944 START TEST 
nvmf_multiconnection 00:25:26.944 ************************************ 00:25:26.944 03:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:26.944 * Looking for test storage... 00:25:26.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:26.944 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:26.944 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:26.944 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:27.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.204 --rc genhtml_branch_coverage=1 00:25:27.204 --rc genhtml_function_coverage=1 00:25:27.204 --rc genhtml_legend=1 00:25:27.204 --rc geninfo_all_blocks=1 00:25:27.204 --rc geninfo_unexecuted_blocks=1 00:25:27.204 00:25:27.204 ' 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:27.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.204 --rc genhtml_branch_coverage=1 00:25:27.204 --rc genhtml_function_coverage=1 00:25:27.204 --rc genhtml_legend=1 00:25:27.204 --rc geninfo_all_blocks=1 00:25:27.204 --rc geninfo_unexecuted_blocks=1 00:25:27.204 00:25:27.204 ' 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:27.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.204 --rc genhtml_branch_coverage=1 00:25:27.204 --rc genhtml_function_coverage=1 00:25:27.204 --rc genhtml_legend=1 00:25:27.204 --rc geninfo_all_blocks=1 00:25:27.204 --rc geninfo_unexecuted_blocks=1 00:25:27.204 00:25:27.204 ' 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:27.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.204 --rc genhtml_branch_coverage=1 00:25:27.204 --rc genhtml_function_coverage=1 00:25:27.204 --rc genhtml_legend=1 00:25:27.204 --rc geninfo_all_blocks=1 00:25:27.204 --rc geninfo_unexecuted_blocks=1 00:25:27.204 00:25:27.204 ' 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:27.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:27.204 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:27.205 03:06:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:33.775 03:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:33.775 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:33.775 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:33.775 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:33.776 Found net devices under 0000:af:00.0: cvl_0_0 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:33.776 Found net devices under 0000:af:00.1: cvl_0_1 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:33.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:25:33.776 00:25:33.776 --- 10.0.0.2 ping statistics --- 00:25:33.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.776 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:33.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:25:33.776 00:25:33.776 --- 10.0.0.1 ping statistics --- 00:25:33.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.776 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=330250 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 330250 00:25:33.776 03:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:33.776 03:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 330250 ']' 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.776 [2024-12-14 03:06:48.049153] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:25:33.776 [2024-12-14 03:06:48.049202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.776 [2024-12-14 03:06:48.126682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:33.776 [2024-12-14 03:06:48.151249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.776 [2024-12-14 03:06:48.151288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.776 [2024-12-14 03:06:48.151295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.776 [2024-12-14 03:06:48.151301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.776 [2024-12-14 03:06:48.151306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
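Before any multiconnection RPCs run, nvmftestinit has already split the two e810 ports into a target/initiator pair and started the target inside a network namespace. A condensed recap of those steps, assembled from the ip/iptables/nvmf_tgt calls in the log above (the cvl_0_0/cvl_0_1 names and the SPDK build path are specific to this rig; treat them as placeholders elsewhere):

  # target NIC goes into its own namespace, initiator NIC stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # let NVMe/TCP traffic in, then sanity-check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # the target itself runs inside the namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF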
00:25:33.776 [2024-12-14 03:06:48.152700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.776 [2024-12-14 03:06:48.152806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.776 [2024-12-14 03:06:48.152891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.776 [2024-12-14 03:06:48.152892] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.776 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 [2024-12-14 03:06:48.280483] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 Malloc1 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
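The bdev/subsystem/namespace/listener sequence starting here for Malloc1 and cnode1 repeats identically for cnode2 through cnode11 (NVMF_SUBSYS=11). As a compact equivalent, a sketch of the loop the script is effectively running, assuming scripts/rpc.py aimed at the target's RPC socket (the test drives the same RPCs through its rpc_cmd wrapper; the socket and namespace plumbing are omitted here):

  # one 64 MB malloc bdev, one subsystem, one namespace and one TCP listener per index
  for i in $(seq 1 11); do
      rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
      rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done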
00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 [2024-12-14 03:06:48.347943] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 Malloc2 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 Malloc3 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 Malloc4 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 Malloc5 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 Malloc6 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 Malloc7 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 Malloc8 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 Malloc9 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:33.778 03:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 Malloc10 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 Malloc11 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.778 03:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:35.154 03:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:35.154 03:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:35.154 03:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:35.154 03:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:35.154 03:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:37.056 03:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:37.056 03:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:37.056 03:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:37.056 03:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:37.056 03:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:37.056 03:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:37.056 03:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.056 03:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:38.432 03:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:38.432 03:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:38.432 03:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:38.432 03:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:38.432 03:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:40.331 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:40.331 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:40.331 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:40.331 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:40.331 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:40.331 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:40.331 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.331 03:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:41.266 03:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:41.266 03:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:41.266 03:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:41.266 03:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:41.266 03:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:43.797 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:43.797 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:43.797 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:43.797 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:43.797 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:43.797 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:43.797 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.797 03:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:44.733 03:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:44.733 03:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:44.733 03:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:44.733 03:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:44.733 03:06:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:46.635 03:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:46.635 03:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:46.635 03:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:46.635 03:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:46.635 03:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:46.635 03:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:46.635 03:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:46.635 03:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:48.012 03:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:48.012 03:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:48.012 03:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:48.012 03:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:48.012 03:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:49.915 03:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:49.915 03:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:49.915 03:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:49.915 03:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:49.915 03:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:49.915 03:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:49.915 03:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.915 03:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:51.293 03:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:51.293 03:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:51.294 03:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:51.294 03:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:51.294 03:07:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:53.198 03:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:53.198 03:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:53.198 03:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:53.457 03:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:53.457 03:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:53.457 03:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:53.457 03:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.457 03:07:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:54.835 03:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:54.835 03:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:54.835 03:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:54.835 03:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:54.835 03:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:56.740 03:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:56.740 03:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:56.740 03:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:56.740 03:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:56.740 03:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:56.740 03:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:56.740 03:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.740 03:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:58.117 03:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:58.117 03:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:58.117 03:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:58.117 03:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:58.117 03:07:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:00.020 03:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:00.020 03:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:00.020 03:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:00.020 03:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:00.020 03:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.020 03:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:00.020 03:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.020 03:07:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:01.395 03:07:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:01.395 03:07:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:01.395 03:07:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.395 03:07:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:01.395 03:07:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:03.927 03:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:03.927 03:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:03.927 03:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:03.927 03:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:03.927 03:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:03.927 03:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:03.927 03:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.927 03:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:04.864 03:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:04.864 03:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:04.864 03:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:04.864 03:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:04.864 03:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:06.767 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:06.767 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:06.767 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:06.767 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:06.767 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:06.767 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:06.767 03:07:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.767 03:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:08.671 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:08.671 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:08.671 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:08.671 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:08.671 03:07:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:10.577 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:10.577 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:10.577 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:10.577 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:10.577 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:10.577 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:10.577 03:07:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:10.577 [global] 00:26:10.577 thread=1 00:26:10.577 invalidate=1 00:26:10.577 rw=read 00:26:10.577 time_based=1 00:26:10.577 runtime=10 00:26:10.577 ioengine=libaio 00:26:10.577 direct=1 00:26:10.577 bs=262144 00:26:10.577 iodepth=64 00:26:10.577 norandommap=1 00:26:10.577 numjobs=1 00:26:10.577 00:26:10.577 [job0] 00:26:10.577 filename=/dev/nvme0n1 00:26:10.577 [job1] 00:26:10.577 filename=/dev/nvme10n1 00:26:10.577 [job2] 00:26:10.577 filename=/dev/nvme1n1 00:26:10.577 [job3] 00:26:10.577 filename=/dev/nvme2n1 00:26:10.577 [job4] 00:26:10.577 filename=/dev/nvme3n1 00:26:10.577 [job5] 00:26:10.577 filename=/dev/nvme4n1 00:26:10.577 [job6] 00:26:10.577 filename=/dev/nvme5n1 00:26:10.577 [job7] 00:26:10.577 filename=/dev/nvme6n1 00:26:10.577 [job8] 00:26:10.577 filename=/dev/nvme7n1 00:26:10.577 [job9] 00:26:10.577 filename=/dev/nvme8n1 00:26:10.577 [job10] 00:26:10.577 filename=/dev/nvme9n1 00:26:10.577 Could not set queue depth (nvme0n1) 00:26:10.577 Could not set queue depth (nvme10n1) 00:26:10.577 Could not set queue depth (nvme1n1) 00:26:10.577 Could not set queue depth (nvme2n1) 00:26:10.577 Could not set queue depth (nvme3n1) 00:26:10.577 Could not set queue depth (nvme4n1) 00:26:10.577 Could not set queue depth (nvme5n1) 00:26:10.577 Could not set queue depth (nvme6n1) 00:26:10.577 Could not set queue depth (nvme7n1) 00:26:10.577 Could not set queue depth (nvme8n1) 00:26:10.577 Could not set queue depth (nvme9n1) 00:26:10.836 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.836 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.836 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.836 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.836 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.836 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.836 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.836 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.836 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.836 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.836 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.836 fio-3.35 00:26:10.836 Starting 11 threads 00:26:23.041 00:26:23.041 job0: (groupid=0, jobs=1): err= 0: pid=330986: Sat Dec 14 03:07:36 2024 00:26:23.041 read: IOPS=331, BW=82.9MiB/s (87.0MB/s)(840MiB/10127msec) 00:26:23.041 slat (usec): min=8, max=191923, avg=2908.24, stdev=13264.68 00:26:23.041 clat (msec): min=18, max=879, avg=189.73, stdev=205.64 00:26:23.041 lat (msec): min=18, max=879, avg=192.64, stdev=208.51 00:26:23.041 clat percentiles (msec): 00:26:23.041 | 1.00th=[ 21], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 26], 00:26:23.041 | 30.00th=[ 30], 40.00th=[ 52], 50.00th=[ 102], 60.00th=[ 155], 00:26:23.041 | 70.00th=[ 249], 80.00th=[ 359], 90.00th=[ 531], 95.00th=[ 651], 00:26:23.041 | 99.00th=[ 776], 99.50th=[ 810], 99.90th=[ 844], 99.95th=[ 869], 00:26:23.041 | 99.99th=[ 877] 00:26:23.041 bw ( KiB/s): min=17920, max=482304, per=8.87%, avg=84377.60, stdev=116224.37, samples=20 00:26:23.041 iops : min= 70, max= 1884, avg=329.60, stdev=454.00, samples=20 00:26:23.041 lat (msec) : 20=0.06%, 50=39.29%, 100=10.54%, 250=20.15%, 500=18.63% 00:26:23.041 lat (msec) : 750=10.03%, 1000=1.31% 00:26:23.041 cpu : usr=0.06%, sys=1.22%, ctx=547, majf=0, minf=4097 00:26:23.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:23.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.041 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.041 job1: (groupid=0, jobs=1): err= 0: pid=330987: Sat Dec 14 03:07:36 2024 00:26:23.041 read: IOPS=191, BW=47.8MiB/s (50.1MB/s)(484MiB/10127msec) 00:26:23.041 slat (usec): min=9, max=325562, avg=2878.97, stdev=16638.47 00:26:23.042 clat (usec): min=1072, max=1012.6k, avg=331385.90, stdev=233278.47 00:26:23.042 lat (usec): min=1241, max=1065.4k, avg=334264.88, stdev=235299.29 00:26:23.042 clat percentiles (msec): 00:26:23.042 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 14], 20.00th=[ 95], 00:26:23.042 | 30.00th=[ 176], 40.00th=[ 279], 50.00th=[ 342], 60.00th=[ 376], 00:26:23.042 | 70.00th=[ 435], 80.00th=[ 527], 90.00th=[ 651], 95.00th=[ 776], 00:26:23.042 | 99.00th=[ 919], 99.50th=[ 978], 99.90th=[ 1011], 99.95th=[ 1011], 00:26:23.042 | 99.99th=[ 1011] 00:26:23.042 bw ( KiB/s): 
min=12800, max=103936, per=5.04%, avg=47948.80, stdev=24646.23, samples=20 00:26:23.042 iops : min= 50, max= 406, avg=187.30, stdev=96.27, samples=20 00:26:23.042 lat (msec) : 2=0.21%, 4=2.07%, 10=5.06%, 20=4.59%, 50=3.87% 00:26:23.042 lat (msec) : 100=4.80%, 250=17.86%, 500=39.55%, 750=16.57%, 1000=5.16% 00:26:23.042 lat (msec) : 2000=0.26% 00:26:23.042 cpu : usr=0.04%, sys=0.73%, ctx=530, majf=0, minf=4097 00:26:23.042 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:23.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.042 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.042 issued rwts: total=1937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.042 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.042 job2: (groupid=0, jobs=1): err= 0: pid=330988: Sat Dec 14 03:07:36 2024 00:26:23.042 read: IOPS=186, BW=46.7MiB/s (49.0MB/s)(473MiB/10125msec) 00:26:23.042 slat (usec): min=14, max=261906, avg=4139.20, stdev=20283.24 00:26:23.042 clat (usec): min=1849, max=992445, avg=338183.69, stdev=229086.86 00:26:23.042 lat (msec): min=2, max=992, avg=342.32, stdev=231.53 00:26:23.042 clat percentiles (msec): 00:26:23.042 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 52], 20.00th=[ 105], 00:26:23.042 | 30.00th=[ 161], 40.00th=[ 241], 50.00th=[ 321], 60.00th=[ 405], 00:26:23.042 | 70.00th=[ 460], 80.00th=[ 567], 90.00th=[ 634], 95.00th=[ 751], 00:26:23.042 | 99.00th=[ 869], 99.50th=[ 894], 99.90th=[ 995], 99.95th=[ 995], 00:26:23.042 | 99.99th=[ 995] 00:26:23.042 bw ( KiB/s): min=14848, max=137216, per=4.92%, avg=46774.40, stdev=35510.40, samples=20 00:26:23.042 iops : min= 58, max= 536, avg=182.70, stdev=138.72, samples=20 00:26:23.042 lat (msec) : 2=0.05%, 4=0.11%, 10=0.53%, 20=3.17%, 50=5.39% 00:26:23.042 lat (msec) : 100=10.58%, 250=22.00%, 500=32.68%, 750=20.99%, 1000=4.49% 00:26:23.042 cpu : usr=0.04%, sys=0.75%, ctx=333, majf=0, minf=4097 00:26:23.042 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:23.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.042 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.042 issued rwts: total=1891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.042 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.042 job3: (groupid=0, jobs=1): err= 0: pid=330994: Sat Dec 14 03:07:36 2024 00:26:23.042 read: IOPS=538, BW=135MiB/s (141MB/s)(1350MiB/10029msec) 00:26:23.042 slat (usec): min=15, max=157066, avg=1353.03, stdev=7408.62 00:26:23.042 clat (usec): min=631, max=724467, avg=117441.11, stdev=124127.63 00:26:23.042 lat (usec): min=655, max=735296, avg=118794.14, stdev=125358.86 00:26:23.042 clat percentiles (usec): 00:26:23.042 | 1.00th=[ 1942], 5.00th=[ 28705], 10.00th=[ 39060], 20.00th=[ 41681], 00:26:23.042 | 30.00th=[ 43779], 40.00th=[ 46400], 50.00th=[ 54264], 60.00th=[ 81265], 00:26:23.042 | 70.00th=[139461], 80.00th=[191890], 90.00th=[240124], 95.00th=[413139], 00:26:23.042 | 99.00th=[616563], 99.50th=[658506], 99.90th=[717226], 99.95th=[717226], 00:26:23.042 | 99.99th=[725615] 00:26:23.042 bw ( KiB/s): min=24576, max=369152, per=14.36%, avg=136580.00, stdev=116262.32, samples=20 00:26:23.042 iops : min= 96, max= 1442, avg=533.50, stdev=454.16, samples=20 00:26:23.042 lat (usec) : 750=0.04%, 1000=0.39% 00:26:23.042 lat (msec) : 2=0.57%, 4=0.63%, 10=1.20%, 20=0.82%, 50=43.48% 00:26:23.042 lat (msec) : 100=17.25%, 250=26.27%, 500=6.50%, 750=2.85% 
00:26:23.042 cpu : usr=0.18%, sys=1.78%, ctx=1102, majf=0, minf=4097 00:26:23.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:23.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.042 issued rwts: total=5398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.042 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.042 job4: (groupid=0, jobs=1): err= 0: pid=330996: Sat Dec 14 03:07:36 2024 00:26:23.042 read: IOPS=212, BW=53.0MiB/s (55.6MB/s)(537MiB/10131msec) 00:26:23.042 slat (usec): min=15, max=327366, avg=3693.20, stdev=18899.52 00:26:23.042 clat (msec): min=8, max=1029, avg=297.82, stdev=245.26 00:26:23.042 lat (msec): min=9, max=1029, avg=301.52, stdev=248.74 00:26:23.042 clat percentiles (msec): 00:26:23.042 | 1.00th=[ 13], 5.00th=[ 18], 10.00th=[ 33], 20.00th=[ 40], 00:26:23.042 | 30.00th=[ 74], 40.00th=[ 136], 50.00th=[ 288], 60.00th=[ 388], 00:26:23.042 | 70.00th=[ 443], 80.00th=[ 531], 90.00th=[ 651], 95.00th=[ 726], 00:26:23.042 | 99.00th=[ 885], 99.50th=[ 894], 99.90th=[ 894], 99.95th=[ 894], 00:26:23.042 | 99.99th=[ 1028] 00:26:23.042 bw ( KiB/s): min=18432, max=193024, per=5.61%, avg=53350.40, stdev=48341.84, samples=20 00:26:23.042 iops : min= 72, max= 754, avg=208.40, stdev=188.84, samples=20 00:26:23.042 lat (msec) : 10=0.19%, 20=6.38%, 50=16.43%, 100=12.43%, 250=13.08% 00:26:23.042 lat (msec) : 500=29.98%, 750=17.32%, 1000=4.14%, 2000=0.05% 00:26:23.042 cpu : usr=0.08%, sys=0.76%, ctx=533, majf=0, minf=4097 00:26:23.042 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:23.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.042 issued rwts: total=2148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.042 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.042 job5: (groupid=0, jobs=1): err= 0: pid=331000: Sat Dec 14 03:07:36 2024 00:26:23.042 read: IOPS=214, BW=53.7MiB/s (56.3MB/s)(544MiB/10126msec) 00:26:23.042 slat (usec): min=13, max=377134, avg=2175.22, stdev=15999.12 00:26:23.042 clat (msec): min=3, max=953, avg=295.46, stdev=244.00 00:26:23.042 lat (msec): min=3, max=953, avg=297.63, stdev=246.22 00:26:23.042 clat percentiles (msec): 00:26:23.042 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 38], 20.00th=[ 65], 00:26:23.042 | 30.00th=[ 101], 40.00th=[ 140], 50.00th=[ 228], 60.00th=[ 338], 00:26:23.042 | 70.00th=[ 426], 80.00th=[ 542], 90.00th=[ 651], 95.00th=[ 735], 00:26:23.042 | 99.00th=[ 885], 99.50th=[ 936], 99.90th=[ 953], 99.95th=[ 953], 00:26:23.042 | 99.99th=[ 953] 00:26:23.042 bw ( KiB/s): min=16384, max=135168, per=5.68%, avg=54043.20, stdev=36834.00, samples=20 00:26:23.042 iops : min= 64, max= 528, avg=211.10, stdev=143.89, samples=20 00:26:23.042 lat (msec) : 4=0.05%, 10=0.87%, 20=2.07%, 50=12.41%, 100=14.76% 00:26:23.042 lat (msec) : 250=21.10%, 500=24.46%, 750=19.77%, 1000=4.51% 00:26:23.042 cpu : usr=0.13%, sys=0.80%, ctx=465, majf=0, minf=4097 00:26:23.042 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:23.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.042 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.042 latency : target=0, window=0, percentile=100.00%, depth=64 
00:26:23.042 job6: (groupid=0, jobs=1): err= 0: pid=331001: Sat Dec 14 03:07:36 2024 00:26:23.042 read: IOPS=207, BW=52.0MiB/s (54.5MB/s)(527MiB/10129msec) 00:26:23.042 slat (usec): min=9, max=219506, avg=3406.07, stdev=16279.89 00:26:23.042 clat (msec): min=5, max=974, avg=304.12, stdev=222.06 00:26:23.042 lat (msec): min=5, max=974, avg=307.53, stdev=224.94 00:26:23.042 clat percentiles (msec): 00:26:23.042 | 1.00th=[ 9], 5.00th=[ 30], 10.00th=[ 78], 20.00th=[ 126], 00:26:23.042 | 30.00th=[ 155], 40.00th=[ 184], 50.00th=[ 218], 60.00th=[ 266], 00:26:23.042 | 70.00th=[ 435], 80.00th=[ 542], 90.00th=[ 634], 95.00th=[ 751], 00:26:23.042 | 99.00th=[ 835], 99.50th=[ 911], 99.90th=[ 953], 99.95th=[ 978], 00:26:23.042 | 99.99th=[ 978] 00:26:23.042 bw ( KiB/s): min=15872, max=136704, per=5.50%, avg=52282.15, stdev=33021.39, samples=20 00:26:23.042 iops : min= 62, max= 534, avg=204.20, stdev=128.97, samples=20 00:26:23.042 lat (msec) : 10=1.28%, 20=1.09%, 50=6.17%, 100=6.22%, 250=43.07% 00:26:23.042 lat (msec) : 500=18.14%, 750=18.95%, 1000=5.08% 00:26:23.042 cpu : usr=0.07%, sys=0.68%, ctx=480, majf=0, minf=4097 00:26:23.042 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:26:23.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.042 issued rwts: total=2106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.042 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.042 job7: (groupid=0, jobs=1): err= 0: pid=331002: Sat Dec 14 03:07:36 2024 00:26:23.042 read: IOPS=493, BW=123MiB/s (129MB/s)(1250MiB/10137msec) 00:26:23.042 slat (usec): min=11, max=143156, avg=1834.96, stdev=8332.28 00:26:23.042 clat (usec): min=1000, max=884817, avg=127794.01, stdev=141425.41 00:26:23.042 lat (usec): min=1024, max=944948, avg=129628.97, stdev=143550.52 00:26:23.042 clat percentiles (msec): 00:26:23.042 | 1.00th=[ 3], 5.00th=[ 16], 10.00th=[ 28], 20.00th=[ 35], 00:26:23.042 | 30.00th=[ 45], 40.00th=[ 52], 50.00th=[ 66], 60.00th=[ 97], 00:26:23.042 | 70.00th=[ 144], 80.00th=[ 203], 90.00th=[ 330], 95.00th=[ 405], 00:26:23.042 | 99.00th=[ 768], 99.50th=[ 802], 99.90th=[ 885], 99.95th=[ 885], 00:26:23.042 | 99.99th=[ 885] 00:26:23.042 bw ( KiB/s): min=20480, max=382464, per=13.29%, avg=126336.00, stdev=103820.23, samples=20 00:26:23.042 iops : min= 80, max= 1494, avg=493.50, stdev=405.55, samples=20 00:26:23.042 lat (msec) : 2=0.82%, 4=1.40%, 10=1.56%, 20=2.60%, 50=32.37% 00:26:23.042 lat (msec) : 100=21.60%, 250=25.55%, 500=11.70%, 750=1.36%, 1000=1.04% 00:26:23.042 cpu : usr=0.15%, sys=1.58%, ctx=947, majf=0, minf=3722 00:26:23.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:23.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.042 issued rwts: total=4999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.042 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.042 job8: (groupid=0, jobs=1): err= 0: pid=331003: Sat Dec 14 03:07:36 2024 00:26:23.042 read: IOPS=240, BW=60.2MiB/s (63.2MB/s)(611MiB/10135msec) 00:26:23.042 slat (usec): min=11, max=535065, avg=3304.30, stdev=19448.96 00:26:23.042 clat (usec): min=935, max=1025.6k, avg=261998.54, stdev=218768.77 00:26:23.042 lat (usec): min=984, max=1025.6k, avg=265302.83, stdev=221596.78 00:26:23.042 clat percentiles (msec): 00:26:23.042 | 1.00th=[ 3], 
5.00th=[ 31], 10.00th=[ 43], 20.00th=[ 89], 00:26:23.042 | 30.00th=[ 144], 40.00th=[ 163], 50.00th=[ 197], 60.00th=[ 224], 00:26:23.042 | 70.00th=[ 271], 80.00th=[ 426], 90.00th=[ 609], 95.00th=[ 768], 00:26:23.043 | 99.00th=[ 986], 99.50th=[ 986], 99.90th=[ 1028], 99.95th=[ 1028], 00:26:23.043 | 99.99th=[ 1028] 00:26:23.043 bw ( KiB/s): min=15872, max=150016, per=6.40%, avg=60902.40, stdev=37914.02, samples=20 00:26:23.043 iops : min= 62, max= 586, avg=237.90, stdev=148.10, samples=20 00:26:23.043 lat (usec) : 1000=0.04% 00:26:23.043 lat (msec) : 2=0.86%, 4=0.25%, 10=1.19%, 20=0.57%, 50=11.55% 00:26:23.043 lat (msec) : 100=7.33%, 250=44.96%, 500=18.63%, 750=9.50%, 1000=5.00% 00:26:23.043 lat (msec) : 2000=0.12% 00:26:23.043 cpu : usr=0.09%, sys=0.94%, ctx=641, majf=0, minf=4097 00:26:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.043 issued rwts: total=2442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.043 job9: (groupid=0, jobs=1): err= 0: pid=331004: Sat Dec 14 03:07:36 2024 00:26:23.043 read: IOPS=265, BW=66.5MiB/s (69.7MB/s)(674MiB/10132msec) 00:26:23.043 slat (usec): min=11, max=223193, avg=2002.80, stdev=12111.34 00:26:23.043 clat (usec): min=1212, max=1096.5k, avg=238449.80, stdev=230391.63 00:26:23.043 lat (usec): min=1265, max=1096.6k, avg=240452.60, stdev=232118.77 00:26:23.043 clat percentiles (msec): 00:26:23.043 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 44], 00:26:23.043 | 30.00th=[ 75], 40.00th=[ 108], 50.00th=[ 153], 60.00th=[ 220], 00:26:23.043 | 70.00th=[ 317], 80.00th=[ 384], 90.00th=[ 651], 95.00th=[ 743], 00:26:23.043 | 99.00th=[ 869], 99.50th=[ 885], 99.90th=[ 978], 99.95th=[ 1099], 00:26:23.043 | 99.99th=[ 1099] 00:26:23.043 bw ( KiB/s): min=16384, max=233472, per=7.08%, avg=67346.90, stdev=61015.93, samples=20 00:26:23.043 iops : min= 64, max= 912, avg=263.05, stdev=238.29, samples=20 00:26:23.043 lat (msec) : 2=0.15%, 4=0.33%, 10=4.01%, 20=5.68%, 50=12.14% 00:26:23.043 lat (msec) : 100=15.48%, 250=25.91%, 500=20.97%, 750=10.65%, 1000=4.60% 00:26:23.043 lat (msec) : 2000=0.07% 00:26:23.043 cpu : usr=0.06%, sys=0.96%, ctx=713, majf=0, minf=4097 00:26:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.043 issued rwts: total=2694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.043 job10: (groupid=0, jobs=1): err= 0: pid=331005: Sat Dec 14 03:07:36 2024 00:26:23.043 read: IOPS=847, BW=212MiB/s (222MB/s)(2126MiB/10031msec) 00:26:23.043 slat (usec): min=10, max=142316, avg=974.17, stdev=5353.76 00:26:23.043 clat (msec): min=17, max=778, avg=74.44, stdev=105.06 00:26:23.043 lat (msec): min=17, max=779, avg=75.42, stdev=106.01 00:26:23.043 clat percentiles (msec): 00:26:23.043 | 1.00th=[ 23], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 29], 00:26:23.043 | 30.00th=[ 32], 40.00th=[ 34], 50.00th=[ 39], 60.00th=[ 48], 00:26:23.043 | 70.00th=[ 58], 80.00th=[ 83], 90.00th=[ 130], 95.00th=[ 305], 00:26:23.043 | 99.00th=[ 584], 99.50th=[ 617], 99.90th=[ 701], 99.95th=[ 776], 00:26:23.043 | 99.99th=[ 776] 00:26:23.043 bw ( 
KiB/s): min=26112, max=528896, per=22.72%, avg=216092.20, stdev=178783.97, samples=20 00:26:23.043 iops : min= 102, max= 2066, avg=844.10, stdev=698.39, samples=20 00:26:23.043 lat (msec) : 20=0.49%, 50=62.46%, 100=22.26%, 250=9.28%, 500=2.85% 00:26:23.043 lat (msec) : 750=2.58%, 1000=0.08% 00:26:23.043 cpu : usr=0.26%, sys=3.23%, ctx=1071, majf=0, minf=4097 00:26:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.043 issued rwts: total=8504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.043 00:26:23.043 Run status group 0 (all jobs): 00:26:23.043 READ: bw=929MiB/s (974MB/s), 46.7MiB/s-212MiB/s (49.0MB/s-222MB/s), io=9414MiB (9871MB), run=10029-10137msec 00:26:23.043 00:26:23.043 Disk stats (read/write): 00:26:23.043 nvme0n1: ios=6570/0, merge=0/0, ticks=1214834/0, in_queue=1214834, util=94.95% 00:26:23.043 nvme10n1: ios=3709/0, merge=0/0, ticks=1226300/0, in_queue=1226300, util=95.35% 00:26:23.043 nvme1n1: ios=3605/0, merge=0/0, ticks=1215735/0, in_queue=1215735, util=95.97% 00:26:23.043 nvme2n1: ios=10467/0, merge=0/0, ticks=1242234/0, in_queue=1242234, util=96.30% 00:26:23.043 nvme3n1: ios=4142/0, merge=0/0, ticks=1216148/0, in_queue=1216148, util=96.51% 00:26:23.043 nvme4n1: ios=4223/0, merge=0/0, ticks=1226050/0, in_queue=1226050, util=97.30% 00:26:23.043 nvme5n1: ios=4032/0, merge=0/0, ticks=1220688/0, in_queue=1220688, util=97.65% 00:26:23.043 nvme6n1: ios=9859/0, merge=0/0, ticks=1189649/0, in_queue=1189649, util=97.97% 00:26:23.043 nvme7n1: ios=4705/0, merge=0/0, ticks=1199038/0, in_queue=1199038, util=98.91% 00:26:23.043 nvme8n1: ios=5266/0, merge=0/0, ticks=1199034/0, in_queue=1199034, util=99.11% 00:26:23.043 nvme9n1: ios=16717/0, merge=0/0, ticks=1244886/0, in_queue=1244886, util=99.25% 00:26:23.043 03:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:23.043 [global] 00:26:23.043 thread=1 00:26:23.043 invalidate=1 00:26:23.043 rw=randwrite 00:26:23.043 time_based=1 00:26:23.043 runtime=10 00:26:23.043 ioengine=libaio 00:26:23.043 direct=1 00:26:23.043 bs=262144 00:26:23.043 iodepth=64 00:26:23.043 norandommap=1 00:26:23.043 numjobs=1 00:26:23.043 00:26:23.043 [job0] 00:26:23.043 filename=/dev/nvme0n1 00:26:23.043 [job1] 00:26:23.043 filename=/dev/nvme10n1 00:26:23.043 [job2] 00:26:23.043 filename=/dev/nvme1n1 00:26:23.043 [job3] 00:26:23.043 filename=/dev/nvme2n1 00:26:23.043 [job4] 00:26:23.043 filename=/dev/nvme3n1 00:26:23.043 [job5] 00:26:23.043 filename=/dev/nvme4n1 00:26:23.043 [job6] 00:26:23.043 filename=/dev/nvme5n1 00:26:23.043 [job7] 00:26:23.043 filename=/dev/nvme6n1 00:26:23.043 [job8] 00:26:23.043 filename=/dev/nvme7n1 00:26:23.043 [job9] 00:26:23.043 filename=/dev/nvme8n1 00:26:23.043 [job10] 00:26:23.043 filename=/dev/nvme9n1 00:26:23.043 Could not set queue depth (nvme0n1) 00:26:23.043 Could not set queue depth (nvme10n1) 00:26:23.043 Could not set queue depth (nvme1n1) 00:26:23.043 Could not set queue depth (nvme2n1) 00:26:23.043 Could not set queue depth (nvme3n1) 00:26:23.043 Could not set queue depth (nvme4n1) 00:26:23.043 Could not set queue depth (nvme5n1) 00:26:23.043 Could not set queue depth (nvme6n1) 00:26:23.043 Could 
not set queue depth (nvme7n1) 00:26:23.043 Could not set queue depth (nvme8n1) 00:26:23.043 Could not set queue depth (nvme9n1) 00:26:23.043 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.043 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.043 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.043 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.043 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.043 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.043 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.043 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.043 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.043 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.043 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.043 fio-3.35 00:26:23.043 Starting 11 threads 00:26:33.023 00:26:33.023 job0: (groupid=0, jobs=1): err= 0: pid=331312: Sat Dec 14 03:07:47 2024 00:26:33.023 write: IOPS=247, BW=61.8MiB/s (64.8MB/s)(632MiB/10216msec); 0 zone resets 00:26:33.023 slat (usec): min=27, max=124117, avg=3950.64, stdev=8279.40 00:26:33.023 clat (msec): min=108, max=607, avg=254.55, stdev=115.80 00:26:33.023 lat (msec): min=115, max=607, avg=258.50, stdev=117.29 00:26:33.023 clat percentiles (msec): 00:26:33.023 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 126], 20.00th=[ 132], 00:26:33.023 | 30.00th=[ 136], 40.00th=[ 182], 50.00th=[ 251], 60.00th=[ 300], 00:26:33.023 | 70.00th=[ 347], 80.00th=[ 380], 90.00th=[ 405], 95.00th=[ 430], 00:26:33.023 | 99.00th=[ 468], 99.50th=[ 535], 99.90th=[ 584], 99.95th=[ 609], 00:26:33.023 | 99.99th=[ 609] 00:26:33.023 bw ( KiB/s): min=36864, max=133120, per=5.70%, avg=63078.40, stdev=31315.43, samples=20 00:26:33.023 iops : min= 144, max= 520, avg=246.40, stdev=122.33, samples=20 00:26:33.023 lat (msec) : 250=49.94%, 500=49.35%, 750=0.71% 00:26:33.023 cpu : usr=0.62%, sys=0.88%, ctx=622, majf=0, minf=1 00:26:33.023 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:33.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.023 issued rwts: total=0,2527,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.023 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.023 job1: (groupid=0, jobs=1): err= 0: pid=331313: Sat Dec 14 03:07:47 2024 00:26:33.023 write: IOPS=320, BW=80.2MiB/s (84.1MB/s)(820MiB/10221msec); 0 zone resets 00:26:33.023 slat (usec): min=27, max=67728, avg=2538.67, stdev=6440.90 00:26:33.023 clat (usec): min=1447, max=595433, avg=196854.43, stdev=119846.19 00:26:33.023 lat (usec): min=1513, max=595477, avg=199393.10, stdev=121683.89 00:26:33.023 clat percentiles (msec): 00:26:33.023 | 1.00th=[ 4], 5.00th=[ 
34], 10.00th=[ 75], 20.00th=[ 111], 00:26:33.023 | 30.00th=[ 126], 40.00th=[ 134], 50.00th=[ 140], 60.00th=[ 186], 00:26:33.024 | 70.00th=[ 253], 80.00th=[ 338], 90.00th=[ 388], 95.00th=[ 405], 00:26:33.024 | 99.00th=[ 456], 99.50th=[ 498], 99.90th=[ 575], 99.95th=[ 592], 00:26:33.024 | 99.99th=[ 592] 00:26:33.024 bw ( KiB/s): min=38912, max=143360, per=7.44%, avg=82304.00, stdev=41312.13, samples=20 00:26:33.024 iops : min= 152, max= 560, avg=321.50, stdev=161.38, samples=20 00:26:33.024 lat (msec) : 2=0.09%, 4=0.95%, 10=1.71%, 20=0.64%, 50=2.62% 00:26:33.024 lat (msec) : 100=11.34%, 250=52.42%, 500=29.80%, 750=0.43% 00:26:33.024 cpu : usr=0.68%, sys=1.04%, ctx=1552, majf=0, minf=1 00:26:33.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:33.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.024 issued rwts: total=0,3279,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.024 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.024 job2: (groupid=0, jobs=1): err= 0: pid=331314: Sat Dec 14 03:07:47 2024 00:26:33.024 write: IOPS=449, BW=112MiB/s (118MB/s)(1129MiB/10042msec); 0 zone resets 00:26:33.024 slat (usec): min=20, max=64114, avg=1887.97, stdev=4508.21 00:26:33.024 clat (msec): min=7, max=404, avg=140.39, stdev=88.00 00:26:33.024 lat (msec): min=10, max=404, avg=142.28, stdev=89.00 00:26:33.024 clat percentiles (msec): 00:26:33.024 | 1.00th=[ 42], 5.00th=[ 61], 10.00th=[ 69], 20.00th=[ 74], 00:26:33.024 | 30.00th=[ 79], 40.00th=[ 84], 50.00th=[ 99], 60.00th=[ 113], 00:26:33.024 | 70.00th=[ 167], 80.00th=[ 232], 90.00th=[ 284], 95.00th=[ 309], 00:26:33.024 | 99.00th=[ 380], 99.50th=[ 397], 99.90th=[ 405], 99.95th=[ 405], 00:26:33.024 | 99.99th=[ 405] 00:26:33.024 bw ( KiB/s): min=43008, max=227840, per=10.31%, avg=113996.80, stdev=62649.77, samples=20 00:26:33.024 iops : min= 168, max= 890, avg=445.30, stdev=244.73, samples=20 00:26:33.024 lat (msec) : 10=0.02%, 20=0.22%, 50=3.30%, 100=50.71%, 250=28.63% 00:26:33.024 lat (msec) : 500=17.12% 00:26:33.024 cpu : usr=0.98%, sys=1.21%, ctx=1459, majf=0, minf=1 00:26:33.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:33.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.024 issued rwts: total=0,4516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.024 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.024 job3: (groupid=0, jobs=1): err= 0: pid=331326: Sat Dec 14 03:07:47 2024 00:26:33.024 write: IOPS=308, BW=77.1MiB/s (80.9MB/s)(780MiB/10119msec); 0 zone resets 00:26:33.024 slat (usec): min=20, max=160599, avg=2886.74, stdev=7447.94 00:26:33.024 clat (usec): min=859, max=552493, avg=204533.69, stdev=128207.79 00:26:33.024 lat (usec): min=911, max=552534, avg=207420.43, stdev=129744.90 00:26:33.024 clat percentiles (msec): 00:26:33.024 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 73], 20.00th=[ 82], 00:26:33.024 | 30.00th=[ 92], 40.00th=[ 140], 50.00th=[ 192], 60.00th=[ 253], 00:26:33.024 | 70.00th=[ 288], 80.00th=[ 330], 90.00th=[ 388], 95.00th=[ 414], 00:26:33.024 | 99.00th=[ 477], 99.50th=[ 489], 99.90th=[ 506], 99.95th=[ 550], 00:26:33.024 | 99.99th=[ 550] 00:26:33.024 bw ( KiB/s): min=38912, max=176640, per=7.08%, avg=78284.80, stdev=43504.59, samples=20 00:26:33.024 iops : min= 152, max= 690, avg=305.80, stdev=169.94, 
samples=20 00:26:33.024 lat (usec) : 1000=0.10% 00:26:33.024 lat (msec) : 2=0.29%, 4=1.03%, 10=1.73%, 20=3.68%, 50=2.37% 00:26:33.024 lat (msec) : 100=22.30%, 250=27.94%, 500=40.40%, 750=0.16% 00:26:33.024 cpu : usr=0.64%, sys=0.94%, ctx=1189, majf=0, minf=1 00:26:33.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:33.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.024 issued rwts: total=0,3121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.024 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.024 job4: (groupid=0, jobs=1): err= 0: pid=331327: Sat Dec 14 03:07:47 2024 00:26:33.024 write: IOPS=386, BW=96.6MiB/s (101MB/s)(988MiB/10224msec); 0 zone resets 00:26:33.024 slat (usec): min=25, max=75512, avg=2102.02, stdev=5458.00 00:26:33.024 clat (msec): min=7, max=609, avg=163.41, stdev=109.67 00:26:33.024 lat (msec): min=7, max=609, avg=165.52, stdev=111.14 00:26:33.024 clat percentiles (msec): 00:26:33.024 | 1.00th=[ 21], 5.00th=[ 53], 10.00th=[ 68], 20.00th=[ 74], 00:26:33.024 | 30.00th=[ 79], 40.00th=[ 96], 50.00th=[ 106], 60.00th=[ 157], 00:26:33.024 | 70.00th=[ 226], 80.00th=[ 266], 90.00th=[ 334], 95.00th=[ 380], 00:26:33.024 | 99.00th=[ 435], 99.50th=[ 485], 99.90th=[ 584], 99.95th=[ 609], 00:26:33.024 | 99.99th=[ 609] 00:26:33.024 bw ( KiB/s): min=36864, max=223232, per=8.99%, avg=99481.60, stdev=60674.44, samples=20 00:26:33.024 iops : min= 144, max= 872, avg=388.60, stdev=237.01, samples=20 00:26:33.024 lat (msec) : 10=0.05%, 20=0.89%, 50=3.29%, 100=41.90%, 250=30.23% 00:26:33.024 lat (msec) : 500=23.19%, 750=0.46% 00:26:33.024 cpu : usr=0.77%, sys=1.27%, ctx=1713, majf=0, minf=1 00:26:33.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:33.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.024 issued rwts: total=0,3950,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.024 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.024 job5: (groupid=0, jobs=1): err= 0: pid=331328: Sat Dec 14 03:07:47 2024 00:26:33.024 write: IOPS=842, BW=211MiB/s (221MB/s)(2133MiB/10121msec); 0 zone resets 00:26:33.024 slat (usec): min=19, max=59976, avg=858.14, stdev=2706.09 00:26:33.024 clat (usec): min=736, max=415183, avg=74915.40, stdev=68966.53 00:26:33.024 lat (usec): min=779, max=415223, avg=75773.54, stdev=69618.28 00:26:33.024 clat percentiles (msec): 00:26:33.024 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 39], 00:26:33.024 | 30.00th=[ 44], 40.00th=[ 46], 50.00th=[ 52], 60.00th=[ 55], 00:26:33.024 | 70.00th=[ 64], 80.00th=[ 108], 90.00th=[ 157], 95.00th=[ 234], 00:26:33.024 | 99.00th=[ 351], 99.50th=[ 376], 99.90th=[ 414], 99.95th=[ 414], 00:26:33.024 | 99.99th=[ 414] 00:26:33.024 bw ( KiB/s): min=60416, max=362496, per=19.60%, avg=216780.80, stdev=108551.80, samples=20 00:26:33.024 iops : min= 236, max= 1416, avg=846.80, stdev=424.03, samples=20 00:26:33.024 lat (usec) : 750=0.01%, 1000=0.07% 00:26:33.024 lat (msec) : 2=0.23%, 4=1.36%, 10=2.39%, 20=3.89%, 50=40.04% 00:26:33.024 lat (msec) : 100=30.70%, 250=17.25%, 500=4.06% 00:26:33.024 cpu : usr=1.56%, sys=2.64%, ctx=3917, majf=0, minf=1 00:26:33.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:33.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:33.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.024 issued rwts: total=0,8532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.024 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.024 job6: (groupid=0, jobs=1): err= 0: pid=331329: Sat Dec 14 03:07:47 2024 00:26:33.024 write: IOPS=321, BW=80.4MiB/s (84.4MB/s)(822MiB/10218msec); 0 zone resets 00:26:33.024 slat (usec): min=22, max=108595, avg=2342.92, stdev=7116.86 00:26:33.024 clat (usec): min=747, max=605595, avg=196419.47, stdev=150725.94 00:26:33.024 lat (usec): min=800, max=605633, avg=198762.39, stdev=152679.61 00:26:33.024 clat percentiles (usec): 00:26:33.024 | 1.00th=[ 1647], 5.00th=[ 3097], 10.00th=[ 4752], 20.00th=[ 13698], 00:26:33.024 | 30.00th=[ 63177], 40.00th=[122160], 50.00th=[223347], 60.00th=[274727], 00:26:33.024 | 70.00th=[304088], 80.00th=[354419], 90.00th=[383779], 95.00th=[408945], 00:26:33.024 | 99.00th=[459277], 99.50th=[505414], 99.90th=[583009], 99.95th=[608175], 00:26:33.024 | 99.99th=[608175] 00:26:33.024 bw ( KiB/s): min=36864, max=216064, per=7.46%, avg=82560.00, stdev=49902.45, samples=20 00:26:33.024 iops : min= 144, max= 844, avg=322.50, stdev=194.93, samples=20 00:26:33.024 lat (usec) : 750=0.03%, 1000=0.15% 00:26:33.024 lat (msec) : 2=1.34%, 4=6.72%, 10=8.82%, 20=6.54%, 50=5.05% 00:26:33.024 lat (msec) : 100=8.58%, 250=16.67%, 500=45.56%, 750=0.55% 00:26:33.024 cpu : usr=0.67%, sys=1.05%, ctx=2104, majf=0, minf=2 00:26:33.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:33.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.024 issued rwts: total=0,3288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.024 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.024 job7: (groupid=0, jobs=1): err= 0: pid=331330: Sat Dec 14 03:07:47 2024 00:26:33.024 write: IOPS=428, BW=107MiB/s (112MB/s)(1095MiB/10222msec); 0 zone resets 00:26:33.024 slat (usec): min=22, max=31421, avg=1775.60, stdev=4707.64 00:26:33.024 clat (msec): min=2, max=601, avg=147.53, stdev=106.97 00:26:33.024 lat (msec): min=3, max=601, avg=149.30, stdev=108.12 00:26:33.024 clat percentiles (msec): 00:26:33.024 | 1.00th=[ 13], 5.00th=[ 43], 10.00th=[ 51], 20.00th=[ 75], 00:26:33.024 | 30.00th=[ 82], 40.00th=[ 92], 50.00th=[ 97], 60.00th=[ 121], 00:26:33.024 | 70.00th=[ 171], 80.00th=[ 239], 90.00th=[ 309], 95.00th=[ 393], 00:26:33.024 | 99.00th=[ 451], 99.50th=[ 468], 99.90th=[ 575], 99.95th=[ 592], 00:26:33.024 | 99.99th=[ 600] 00:26:33.024 bw ( KiB/s): min=36864, max=248320, per=9.99%, avg=110464.00, stdev=63071.09, samples=20 00:26:33.024 iops : min= 144, max= 970, avg=431.50, stdev=246.37, samples=20 00:26:33.024 lat (msec) : 4=0.16%, 10=0.59%, 20=0.94%, 50=8.24%, 100=42.89% 00:26:33.024 lat (msec) : 250=29.98%, 500=16.81%, 750=0.39% 00:26:33.024 cpu : usr=0.92%, sys=1.16%, ctx=1904, majf=0, minf=1 00:26:33.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:33.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.024 issued rwts: total=0,4379,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.024 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.024 job8: (groupid=0, jobs=1): err= 0: pid=331331: Sat Dec 14 03:07:47 2024 00:26:33.024 write: IOPS=305, BW=76.4MiB/s 
(80.2MB/s)(781MiB/10220msec); 0 zone resets 00:26:33.024 slat (usec): min=22, max=115078, avg=2193.38, stdev=6118.89 00:26:33.024 clat (usec): min=1004, max=605071, avg=206997.44, stdev=123167.73 00:26:33.024 lat (usec): min=1067, max=605124, avg=209190.82, stdev=124321.50 00:26:33.024 clat percentiles (msec): 00:26:33.024 | 1.00th=[ 6], 5.00th=[ 45], 10.00th=[ 65], 20.00th=[ 118], 00:26:33.024 | 30.00th=[ 128], 40.00th=[ 136], 50.00th=[ 167], 60.00th=[ 209], 00:26:33.024 | 70.00th=[ 268], 80.00th=[ 334], 90.00th=[ 397], 95.00th=[ 426], 00:26:33.024 | 99.00th=[ 502], 99.50th=[ 531], 99.90th=[ 584], 99.95th=[ 609], 00:26:33.024 | 99.99th=[ 609] 00:26:33.024 bw ( KiB/s): min=37888, max=154624, per=7.08%, avg=78361.60, stdev=33476.33, samples=20 00:26:33.024 iops : min= 148, max= 604, avg=306.10, stdev=130.77, samples=20 00:26:33.024 lat (msec) : 2=0.29%, 4=0.38%, 10=0.86%, 20=1.25%, 50=3.26% 00:26:33.024 lat (msec) : 100=9.38%, 250=51.26%, 500=32.26%, 750=1.06% 00:26:33.025 cpu : usr=0.63%, sys=1.00%, ctx=1618, majf=0, minf=1 00:26:33.025 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:33.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.025 issued rwts: total=0,3125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.025 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.025 job9: (groupid=0, jobs=1): err= 0: pid=331333: Sat Dec 14 03:07:47 2024 00:26:33.025 write: IOPS=276, BW=69.2MiB/s (72.6MB/s)(708MiB/10219msec); 0 zone resets 00:26:33.025 slat (usec): min=21, max=68623, avg=2947.67, stdev=7399.76 00:26:33.025 clat (usec): min=795, max=600465, avg=227996.79, stdev=131463.76 00:26:33.025 lat (usec): min=839, max=600508, avg=230944.45, stdev=133501.15 00:26:33.025 clat percentiles (msec): 00:26:33.025 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 42], 20.00th=[ 88], 00:26:33.025 | 30.00th=[ 157], 40.00th=[ 184], 50.00th=[ 232], 60.00th=[ 279], 00:26:33.025 | 70.00th=[ 305], 80.00th=[ 372], 90.00th=[ 401], 95.00th=[ 422], 00:26:33.025 | 99.00th=[ 464], 99.50th=[ 502], 99.90th=[ 575], 99.95th=[ 600], 00:26:33.025 | 99.99th=[ 600] 00:26:33.025 bw ( KiB/s): min=36864, max=167424, per=6.40%, avg=70809.60, stdev=40219.02, samples=20 00:26:33.025 iops : min= 144, max= 654, avg=276.60, stdev=157.11, samples=20 00:26:33.025 lat (usec) : 1000=0.14% 00:26:33.025 lat (msec) : 2=0.67%, 4=1.63%, 10=3.14%, 20=1.84%, 50=4.73% 00:26:33.025 lat (msec) : 100=9.36%, 250=33.39%, 500=44.45%, 750=0.64% 00:26:33.025 cpu : usr=0.65%, sys=0.87%, ctx=1380, majf=0, minf=1 00:26:33.025 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:33.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.025 issued rwts: total=0,2830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.025 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.025 job10: (groupid=0, jobs=1): err= 0: pid=331338: Sat Dec 14 03:07:47 2024 00:26:33.025 write: IOPS=457, BW=114MiB/s (120MB/s)(1157MiB/10121msec); 0 zone resets 00:26:33.025 slat (usec): min=23, max=62237, avg=1992.46, stdev=4867.78 00:26:33.025 clat (usec): min=1366, max=440830, avg=137966.42, stdev=96705.31 00:26:33.025 lat (msec): min=2, max=440, avg=139.96, stdev=98.08 00:26:33.025 clat percentiles (msec): 00:26:33.025 | 1.00th=[ 8], 5.00th=[ 19], 10.00th=[ 40], 20.00th=[ 80], 
00:26:33.025 | 30.00th=[ 86], 40.00th=[ 93], 50.00th=[ 102], 60.00th=[ 115], 00:26:33.025 | 70.00th=[ 136], 80.00th=[ 234], 90.00th=[ 296], 95.00th=[ 342], 00:26:33.025 | 99.00th=[ 409], 99.50th=[ 418], 99.90th=[ 439], 99.95th=[ 443], 00:26:33.025 | 99.99th=[ 443] 00:26:33.025 bw ( KiB/s): min=39936, max=299008, per=10.56%, avg=116787.20, stdev=71762.70, samples=20 00:26:33.025 iops : min= 156, max= 1168, avg=456.20, stdev=280.32, samples=20 00:26:33.025 lat (msec) : 2=0.02%, 4=0.22%, 10=2.27%, 20=2.88%, 50=6.14% 00:26:33.025 lat (msec) : 100=37.92%, 250=32.75%, 500=17.81% 00:26:33.025 cpu : usr=0.97%, sys=1.48%, ctx=1768, majf=0, minf=1 00:26:33.025 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:33.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.025 issued rwts: total=0,4626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.025 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.025 00:26:33.025 Run status group 0 (all jobs): 00:26:33.025 WRITE: bw=1080MiB/s (1133MB/s), 61.8MiB/s-211MiB/s (64.8MB/s-221MB/s), io=10.8GiB (11.6GB), run=10042-10224msec 00:26:33.025 00:26:33.025 Disk stats (read/write): 00:26:33.025 nvme0n1: ios=55/5017, merge=0/0, ticks=2226/1229218, in_queue=1231444, util=99.88% 00:26:33.025 nvme10n1: ios=50/6515, merge=0/0, ticks=52/1237057, in_queue=1237109, util=97.76% 00:26:33.025 nvme1n1: ios=49/8699, merge=0/0, ticks=33/1217772, in_queue=1217805, util=97.76% 00:26:33.025 nvme2n1: ios=27/6058, merge=0/0, ticks=27/1211297, in_queue=1211324, util=97.83% 00:26:33.025 nvme3n1: ios=51/7851, merge=0/0, ticks=1060/1235123, in_queue=1236183, util=100.00% 00:26:33.025 nvme4n1: ios=48/16875, merge=0/0, ticks=2381/1207982, in_queue=1210363, util=100.00% 00:26:33.025 nvme5n1: ios=0/6538, merge=0/0, ticks=0/1241039, in_queue=1241039, util=98.35% 00:26:33.025 nvme6n1: ios=21/8715, merge=0/0, ticks=230/1239148, in_queue=1239378, util=99.11% 00:26:33.025 nvme7n1: ios=0/6209, merge=0/0, ticks=0/1243905, in_queue=1243905, util=98.81% 00:26:33.025 nvme8n1: ios=42/5621, merge=0/0, ticks=1072/1237259, in_queue=1238331, util=100.00% 00:26:33.025 nvme9n1: ios=0/9064, merge=0/0, ticks=0/1209502, in_queue=1209502, util=99.07% 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:33.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o 
NAME,SERIAL 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.025 03:07:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:33.593 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:33.593 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:33.593 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:33.593 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:33.593 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:33.593 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:33.593 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:33.593 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:33.593 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:33.593 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.593 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.593 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.593 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.593 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:33.852 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:33.852 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:33.852 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:33.852 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:33.852 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:33.852 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o 
NAME,SERIAL 00:26:33.852 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:33.852 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:33.852 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:33.852 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.852 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.852 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.852 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.852 03:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:34.111 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:34.111 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:34.111 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.111 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.111 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:34.111 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:34.111 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:34.111 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.111 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:34.111 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.111 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.111 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.111 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.111 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:34.370 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o 
NAME,SERIAL 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:34.370 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:34.370 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:34.629 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.629 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:34.629 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.629 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.629 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.629 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.629 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:34.888 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDK7 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:34.888 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:34.888 03:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.888 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.888 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:34.888 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:34.888 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:35.147 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o 
NAME,SERIAL 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.147 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:35.147 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:35.405 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:35.405 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:35.406 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # 
lsblk -l -o NAME,SERIAL 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:35.406 rmmod nvme_tcp 00:26:35.406 rmmod nvme_fabrics 00:26:35.406 rmmod nvme_keyring 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 330250 ']' 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 330250 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 330250 ']' 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 330250 00:26:35.406 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:35.665 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:35.665 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 330250 00:26:35.665 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:35.665 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:35.665 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 330250' 
00:26:35.665 killing process with pid 330250 00:26:35.665 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 330250 00:26:35.665 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 330250 00:26:35.924 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:35.924 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:35.924 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:35.924 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:35.924 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:35.924 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:35.924 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:35.924 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:35.924 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:35.924 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.924 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.924 03:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.476 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:38.476 00:26:38.476 real 1m11.120s 00:26:38.476 user 4m16.695s 00:26:38.476 sys 0m17.485s 00:26:38.476 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:38.476 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:38.476 ************************************ 00:26:38.476 END TEST nvmf_multiconnection 00:26:38.476 ************************************ 00:26:38.476 03:07:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:38.476 03:07:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:38.476 03:07:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:38.476 03:07:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:38.476 ************************************ 00:26:38.476 START TEST nvmf_initiator_timeout 00:26:38.476 ************************************ 00:26:38.476 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:38.476 * Looking for test storage... 
00:26:38.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:38.476 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:38.476 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:38.476 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:38.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.477 --rc genhtml_branch_coverage=1 00:26:38.477 --rc genhtml_function_coverage=1 00:26:38.477 --rc genhtml_legend=1 00:26:38.477 --rc geninfo_all_blocks=1 00:26:38.477 --rc geninfo_unexecuted_blocks=1 00:26:38.477 00:26:38.477 ' 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:38.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.477 --rc genhtml_branch_coverage=1 00:26:38.477 --rc genhtml_function_coverage=1 00:26:38.477 --rc genhtml_legend=1 00:26:38.477 --rc geninfo_all_blocks=1 00:26:38.477 --rc geninfo_unexecuted_blocks=1 00:26:38.477 00:26:38.477 ' 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:38.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.477 --rc genhtml_branch_coverage=1 00:26:38.477 --rc genhtml_function_coverage=1 00:26:38.477 --rc genhtml_legend=1 00:26:38.477 --rc geninfo_all_blocks=1 00:26:38.477 --rc geninfo_unexecuted_blocks=1 00:26:38.477 00:26:38.477 ' 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:38.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.477 --rc genhtml_branch_coverage=1 00:26:38.477 --rc genhtml_function_coverage=1 00:26:38.477 --rc genhtml_legend=1 00:26:38.477 --rc geninfo_all_blocks=1 00:26:38.477 --rc geninfo_unexecuted_blocks=1 00:26:38.477 00:26:38.477 ' 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.477 03:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:38.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.477 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:38.478 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:38.478 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:38.478 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.478 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.478 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.478 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:38.478 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:38.478 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:38.478 03:07:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:43.752 03:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:43.752 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:43.752 03:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:43.752 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:43.752 Found net devices under 0000:af:00.0: cvl_0_0 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.752 03:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:43.752 Found net devices under 0000:af:00.1: cvl_0_1 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:43.752 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.753 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:43.753 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:43.753 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.012 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.012 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.012 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.012 03:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:44.012 03:07:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.012 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.012 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.012 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:44.012 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:44.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:26:44.012 00:26:44.012 --- 10.0.0.2 ping statistics --- 00:26:44.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.012 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:26:44.012 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:44.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:26:44.012 00:26:44.012 --- 10.0.0.1 ping statistics --- 00:26:44.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.012 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:26:44.012 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.012 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:44.012 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:44.012 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.012 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:44.012 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:44.012 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.012 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:44.012 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=333788 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 
333788 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 333788 ']' 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.271 [2024-12-14 03:07:59.201695] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:26:44.271 [2024-12-14 03:07:59.201743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.271 [2024-12-14 03:07:59.281771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:44.271 [2024-12-14 03:07:59.304763] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.271 [2024-12-14 03:07:59.304800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.271 [2024-12-14 03:07:59.304806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.271 [2024-12-14 03:07:59.304813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.271 [2024-12-14 03:07:59.304818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
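The records above show nvmfappstart launching nvmf_tgt inside the target namespace and then blocking in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A rough, minimal sketch of that pattern follows; the spdk_get_version polling loop is an assumption for illustration, not the harness's own waitforlisten code:

  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket until the target is up and accepting configuration calls.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done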
00:26:44.271 [2024-12-14 03:07:59.306240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.271 [2024-12-14 03:07:59.306373] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.271 [2024-12-14 03:07:59.306426] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.271 [2024-12-14 03:07:59.306427] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:44.271 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.531 Malloc0 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.531 Delay0 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.531 [2024-12-14 03:07:59.484856] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.531 03:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.531 [2024-12-14 03:07:59.518133] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.531 03:07:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:45.908 03:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:45.908 03:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:45.908 03:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:45.908 03:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:45.908 03:08:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:47.809 03:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:47.809 03:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:47.809 03:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:47.809 03:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:47.809 03:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:47.809 03:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:47.809 03:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=333856 00:26:47.809 03:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 
-t write -r 60 -v 00:26:47.809 03:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:47.809 [global] 00:26:47.809 thread=1 00:26:47.809 invalidate=1 00:26:47.809 rw=write 00:26:47.809 time_based=1 00:26:47.809 runtime=60 00:26:47.809 ioengine=libaio 00:26:47.809 direct=1 00:26:47.809 bs=4096 00:26:47.809 iodepth=1 00:26:47.809 norandommap=0 00:26:47.809 numjobs=1 00:26:47.809 00:26:47.809 verify_dump=1 00:26:47.809 verify_backlog=512 00:26:47.809 verify_state_save=0 00:26:47.809 do_verify=1 00:26:47.809 verify=crc32c-intel 00:26:47.809 [job0] 00:26:47.809 filename=/dev/nvme0n1 00:26:47.809 Could not set queue depth (nvme0n1) 00:26:48.067 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:48.067 fio-3.35 00:26:48.067 Starting 1 thread 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.600 true 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.600 true 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.600 true 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.600 true 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.600 03:08:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:53.890 true 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:53.890 true 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:53.890 true 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:53.890 true 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:53.890 03:08:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 333856 00:27:50.125 00:27:50.125 job0: (groupid=0, jobs=1): err= 0: pid=333976: Sat Dec 14 03:09:03 2024 00:27:50.125 read: IOPS=431, BW=1725KiB/s (1766kB/s)(101MiB/60013msec) 00:27:50.125 slat (usec): min=7, max=11685, avg= 8.84, stdev=75.76 00:27:50.125 clat (usec): min=191, max=41682k, avg=2100.37, stdev=259141.57 00:27:50.125 lat (usec): min=201, max=41682k, avg=2109.22, stdev=259141.69 00:27:50.125 clat percentiles (usec): 00:27:50.125 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:27:50.125 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 245], 00:27:50.125 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 269], 00:27:50.125 | 99.00th=[ 474], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:27:50.125 | 99.99th=[41681] 00:27:50.125 write: IOPS=435, BW=1740KiB/s (1782kB/s)(102MiB/60013msec); 0 zone resets 00:27:50.125 slat (usec): min=10, max=40681, avg=14.35, stdev=300.52 00:27:50.125 clat (usec): min=24, max=375, avg=187.75, stdev=23.16 00:27:50.125 lat (usec): min=153, max=41014, avg=202.10, stdev=302.78 00:27:50.125 clat percentiles (usec): 00:27:50.125 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:27:50.125 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:27:50.125 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 223], 00:27:50.125 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 314], 99.95th=[ 318], 00:27:50.125 | 99.99th=[ 330] 00:27:50.125 bw ( KiB/s): min= 3824, 
max= 9960, per=100.00%, avg=8355.84, stdev=1455.82, samples=25 00:27:50.125 iops : min= 956, max= 2490, avg=2088.96, stdev=363.95, samples=25 00:27:50.125 lat (usec) : 50=0.01%, 250=84.33%, 500=15.34%, 750=0.03%, 1000=0.01% 00:27:50.125 lat (msec) : 50=0.30%, >=2000=0.01% 00:27:50.125 cpu : usr=0.83%, sys=1.29%, ctx=51993, majf=0, minf=36 00:27:50.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.125 issued rwts: total=25875,26112,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:50.125 00:27:50.125 Run status group 0 (all jobs): 00:27:50.125 READ: bw=1725KiB/s (1766kB/s), 1725KiB/s-1725KiB/s (1766kB/s-1766kB/s), io=101MiB (106MB), run=60013-60013msec 00:27:50.125 WRITE: bw=1740KiB/s (1782kB/s), 1740KiB/s-1740KiB/s (1782kB/s-1782kB/s), io=102MiB (107MB), run=60013-60013msec 00:27:50.125 00:27:50.125 Disk stats (read/write): 00:27:50.125 nvme0n1: ios=25973/26112, merge=0/0, ticks=14671/4610, in_queue=19281, util=99.80% 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:50.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:50.125 nvmf hotplug test: fio successful as expected 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 
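Condensed from the trace above, the initiator_timeout flow reduces to roughly the sketch below. The RPC names, bdev parameters, NQN, and the 10.0.0.2:4420 listener are taken from this run; the rpc() wrapper and the trimmed nvme connect options (--hostnqn/--hostid omitted) are simplifications, not the harness's exact code:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc bdev_malloc_create 64 512 -b Malloc0                            # 64 MB malloc bdev, 512 B blocks
  rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30  # delay bdev; latency arguments are in microseconds
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a allows any host
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # While fio writes for 60 s, the delay latencies are raised from 30 us to ~31 s and later dropped
  # back to 30 us, which is what exercises the initiator's I/O timeout handling:
  rpc bdev_delay_update_latency Delay0 avg_read 31000000
  rpc bdev_delay_update_latency Delay0 avg_write 31000000             # the p99 latencies are bumped the same way
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1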
00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:50.125 rmmod nvme_tcp 00:27:50.125 rmmod nvme_fabrics 00:27:50.125 rmmod nvme_keyring 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 333788 ']' 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 333788 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 333788 ']' 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 333788 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:50.125 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333788 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333788' 00:27:50.126 killing process with pid 333788 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 333788 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 333788 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:50.126 03:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.126 03:09:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.694 03:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:50.694 00:27:50.694 real 1m12.515s 00:27:50.694 user 4m22.640s 00:27:50.694 sys 0m7.493s 00:27:50.694 03:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:50.694 03:09:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:50.694 ************************************ 00:27:50.694 END TEST nvmf_initiator_timeout 00:27:50.694 ************************************ 00:27:50.694 03:09:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:50.694 03:09:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:50.694 03:09:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:50.694 03:09:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:50.694 03:09:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.271 03:09:11 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:57.271 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:57.271 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:57.271 Found net devices under 0000:af:00.0: cvl_0_0 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:57.271 Found net devices under 0000:af:00.1: cvl_0_1 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:57.271 ************************************ 00:27:57.271 START TEST nvmf_perf_adq 00:27:57.271 ************************************ 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:57.271 * Looking for test storage... 
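The same E810 discovery walk from earlier in the log runs again here at the start of perf_adq.sh. Stripped of the harness's pci_bus_cache arrays, the pattern being traced is approximately the loop below; the 8086:159b device ID and the cvl_0_* results come from this run, while the lspci-based loop itself is a simplified assumption:

  # Map each Intel E810 function (vendor 0x8086, device 0x159b) to its kernel net device name.
  for pci in $(lspci -Dnm -d 8086:159b | awk '{print $1}'); do
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$path" ] && echo "Found net devices under $pci: $(basename "$path")"
      done
  done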
00:27:57.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:57.271 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:57.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.272 --rc genhtml_branch_coverage=1 00:27:57.272 --rc genhtml_function_coverage=1 00:27:57.272 --rc genhtml_legend=1 00:27:57.272 --rc geninfo_all_blocks=1 00:27:57.272 --rc geninfo_unexecuted_blocks=1 00:27:57.272 00:27:57.272 ' 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:57.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.272 --rc genhtml_branch_coverage=1 00:27:57.272 --rc genhtml_function_coverage=1 00:27:57.272 --rc genhtml_legend=1 00:27:57.272 --rc geninfo_all_blocks=1 00:27:57.272 --rc geninfo_unexecuted_blocks=1 00:27:57.272 00:27:57.272 ' 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:57.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.272 --rc genhtml_branch_coverage=1 00:27:57.272 --rc genhtml_function_coverage=1 00:27:57.272 --rc genhtml_legend=1 00:27:57.272 --rc geninfo_all_blocks=1 00:27:57.272 --rc geninfo_unexecuted_blocks=1 00:27:57.272 00:27:57.272 ' 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:57.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.272 --rc genhtml_branch_coverage=1 00:27:57.272 --rc genhtml_function_coverage=1 00:27:57.272 --rc genhtml_legend=1 00:27:57.272 --rc geninfo_all_blocks=1 00:27:57.272 --rc geninfo_unexecuted_blocks=1 00:27:57.272 00:27:57.272 ' 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
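Sourcing nvmf/common.sh, which the trace is stepping through here, also derives the host identity that the earlier nvme connect passed along. A hedged sketch of that step (the UUID extraction below is written out for illustration and may not match common.sh verbatim):

  NVME_HOSTNQN=$(nvme gen-hostnqn)         # e.g. nqn.2014-08.org.nvmexpress:uuid:80b56b8f-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}      # keep only the UUID portion
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # The harness expands these as: nvme connect "${NVME_HOST[@]}" -t tcp -n <subnqn> -a <ip> -s 4420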
00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:57.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:57.272 03:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:57.272 03:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:02.604 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.604 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:02.604 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:02.604 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:02.604 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:02.604 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:02.604 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:02.604 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:02.604 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:02.604 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:02.604 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:02.604 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:02.605 03:09:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:02.605 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:02.605 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:02.605 Found net devices under 0000:af:00.0: cvl_0_0 00:28:02.605 03:09:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:02.605 Found net devices under 0000:af:00.1: cvl_0_1 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:02.605 03:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:03.543 03:09:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:07.737 03:09:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:11.933 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:11.933 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:11.933 Found net devices under 0000:af:00.0: cvl_0_0 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:11.933 Found net devices under 0000:af:00.1: cvl_0_1 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:11.933 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.934 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.934 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:11.934 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:11.934 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.193 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.193 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.193 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.194 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:12.194 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:12.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.839 ms 00:28:12.453 00:28:12.453 --- 10.0.0.2 ping statistics --- 00:28:12.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.453 rtt min/avg/max/mdev = 0.839/0.839/0.839/0.000 ms 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:12.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:28:12.453 00:28:12.453 --- 10.0.0.1 ping statistics --- 00:28:12.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.453 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=339994 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 339994 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 339994 ']' 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.453 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.453 [2024-12-14 03:09:27.481301] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
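For reference, the nvmf_tcp_init sequence traced above reduces to the iproute2/iptables commands below. This is a condensed, hand-written sketch rather than the harness script itself; the interface names (cvl_0_0, cvl_0_1), the addresses (10.0.0.1/10.0.0.2), and port 4420 are simply the values this run used.

# Condensed sketch of the traced network setup (nvmf_tcp_init in test/nvmf/common.sh).
# Names and addresses are the ones picked in this run; adjust for other hardware.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # target-side port, moved into the namespace
INI_IF=cvl_0_1          # initiator-side port, stays in the root namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP traffic (port 4420) in on the initiator-side interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Sanity-check reachability in both directions.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1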
00:28:12.453 [2024-12-14 03:09:27.481355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.453 [2024-12-14 03:09:27.560025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:12.453 [2024-12-14 03:09:27.582623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.453 [2024-12-14 03:09:27.582660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.453 [2024-12-14 03:09:27.582670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.453 [2024-12-14 03:09:27.582677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.453 [2024-12-14 03:09:27.582682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.453 [2024-12-14 03:09:27.584023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.453 [2024-12-14 03:09:27.584130] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.453 [2024-12-14 03:09:27.584235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.453 [2024-12-14 03:09:27.584236] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.712 
03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.712 [2024-12-14 03:09:27.804843] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.712 Malloc1 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.712 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.971 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.971 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:12.971 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.971 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.971 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.971 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:12.971 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.971 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.971 [2024-12-14 03:09:27.860930] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.971 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.971 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=340022 00:28:12.971 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:12.971 03:09:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:14.873 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:14.873 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.873 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.873 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.873 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:14.873 "tick_rate": 2100000000, 00:28:14.873 "poll_groups": [ 00:28:14.873 { 00:28:14.873 "name": "nvmf_tgt_poll_group_000", 00:28:14.873 "admin_qpairs": 1, 00:28:14.873 "io_qpairs": 1, 00:28:14.873 "current_admin_qpairs": 1, 00:28:14.873 "current_io_qpairs": 1, 00:28:14.873 "pending_bdev_io": 0, 00:28:14.873 "completed_nvme_io": 19284, 00:28:14.873 "transports": [ 00:28:14.873 { 00:28:14.873 "trtype": "TCP" 00:28:14.873 } 00:28:14.873 ] 00:28:14.873 }, 00:28:14.873 { 00:28:14.873 "name": "nvmf_tgt_poll_group_001", 00:28:14.873 "admin_qpairs": 0, 00:28:14.873 "io_qpairs": 1, 00:28:14.873 "current_admin_qpairs": 0, 00:28:14.873 "current_io_qpairs": 1, 00:28:14.873 "pending_bdev_io": 0, 00:28:14.873 "completed_nvme_io": 19494, 00:28:14.873 "transports": [ 00:28:14.873 { 00:28:14.873 "trtype": "TCP" 00:28:14.873 } 00:28:14.873 ] 00:28:14.873 }, 00:28:14.873 { 00:28:14.873 "name": "nvmf_tgt_poll_group_002", 00:28:14.873 "admin_qpairs": 0, 00:28:14.873 "io_qpairs": 1, 00:28:14.873 "current_admin_qpairs": 0, 00:28:14.873 "current_io_qpairs": 1, 00:28:14.873 "pending_bdev_io": 0, 00:28:14.873 "completed_nvme_io": 19627, 00:28:14.873 "transports": [ 00:28:14.873 { 00:28:14.873 "trtype": "TCP" 00:28:14.873 } 00:28:14.873 ] 00:28:14.873 }, 00:28:14.873 { 00:28:14.873 "name": "nvmf_tgt_poll_group_003", 00:28:14.873 "admin_qpairs": 0, 00:28:14.873 "io_qpairs": 1, 00:28:14.873 "current_admin_qpairs": 0, 00:28:14.873 "current_io_qpairs": 1, 00:28:14.873 "pending_bdev_io": 0, 00:28:14.873 "completed_nvme_io": 19353, 00:28:14.873 "transports": [ 00:28:14.873 { 00:28:14.873 "trtype": "TCP" 00:28:14.873 } 00:28:14.873 ] 00:28:14.873 } 00:28:14.873 ] 00:28:14.873 }' 00:28:14.873 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:14.873 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:14.873 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:14.873 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:14.873 03:09:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 340022 00:28:23.102 Initializing NVMe Controllers 00:28:23.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:23.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:23.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:23.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:23.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:28:23.102 Initialization complete. Launching workers. 00:28:23.102 ======================================================== 00:28:23.102 Latency(us) 00:28:23.102 Device Information : IOPS MiB/s Average min max 00:28:23.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10234.90 39.98 6253.25 1829.57 10651.88 00:28:23.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10375.80 40.53 6168.46 1642.89 10878.50 00:28:23.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10470.50 40.90 6113.05 2089.84 10639.17 00:28:23.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10298.50 40.23 6215.25 1813.37 10735.14 00:28:23.102 ======================================================== 00:28:23.102 Total : 41379.68 161.64 6187.06 1642.89 10878.50 00:28:23.102 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.102 rmmod nvme_tcp 00:28:23.102 rmmod nvme_fabrics 00:28:23.102 rmmod nvme_keyring 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 339994 ']' 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 339994 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 339994 ']' 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 339994 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 339994 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 339994' 00:28:23.102 killing process with pid 339994 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 339994 00:28:23.102 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 339994 00:28:23.405 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.405 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.405 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.405 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:23.405 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:23.405 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.405 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.405 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.405 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.405 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.405 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.405 03:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.539 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.539 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:25.539 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:25.539 03:09:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:26.475 03:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:29.008 03:09:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:34.281 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:34.282 03:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:34.282 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:34.282 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:34.282 Found net devices under 0000:af:00.0: cvl_0_0 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.282 03:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:34.282 Found net devices under 0000:af:00.1: cvl_0_1 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:34.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.747 ms 00:28:34.282 00:28:34.282 --- 10.0.0.2 ping statistics --- 00:28:34.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.282 rtt min/avg/max/mdev = 0.747/0.747/0.747/0.000 ms 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:34.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:28:34.282 00:28:34.282 --- 10.0.0.1 ping statistics --- 00:28:34.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.282 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:28:34.282 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.283 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:34.283 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:34.283 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.283 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:34.283 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:34.283 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.283 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:34.283 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:34.542 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:34.542 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:34.542 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:34.542 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:34.542 net.core.busy_poll = 1 00:28:34.542 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:28:34.542 net.core.busy_read = 1 00:28:34.542 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:34.542 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:34.542 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:34.542 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:34.542 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=340776 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 340776 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 340776 ']' 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.801 [2024-12-14 03:09:49.756792] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:34.801 [2024-12-14 03:09:49.756846] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.801 [2024-12-14 03:09:49.837164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.801 [2024-12-14 03:09:49.859690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
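The adq_configure_driver steps traced above amount to the following host-side ADQ setup on the E810 port. This is a condensed sketch using the values from this run (two traffic classes with two queues each, NVMe/TCP on 10.0.0.2:4420 steered to TC1); set_xps_rxqs is the helper script shipped in the SPDK repo.

# Condensed sketch of the ADQ host configuration traced above
# (adq_configure_driver in target/perf_adq.sh), run against the port
# inside the target namespace.
NS=cvl_0_0_ns_spdk
IF=cvl_0_0

# Enable hardware TC offload and disable packet-inspect optimization.
ip netns exec "$NS" ethtool --offload "$IF" hw-tc-offload on
ip netns exec "$NS" ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off

# Busy polling so application threads poll their queues instead of sleeping.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 = default (2 queues at offset 0),
# TC1 = ADQ queues (2 queues at offset 2), offloaded to the NIC.
ip netns exec "$NS" tc qdisc add dev "$IF" root mqprio \
    num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec "$NS" tc qdisc add dev "$IF" ingress

# Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1 in hardware.
ip netns exec "$NS" tc filter add dev "$IF" protocol ip parent ffff: prio 1 \
    flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# Pin XPS/RX queues using the SPDK helper.
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs "$IF"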
00:28:34.801 [2024-12-14 03:09:49.859730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.801 [2024-12-14 03:09:49.859738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.801 [2024-12-14 03:09:49.859744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.801 [2024-12-14 03:09:49.859750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.801 [2024-12-14 03:09:49.861069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.801 [2024-12-14 03:09:49.861177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.801 [2024-12-14 03:09:49.861262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.801 [2024-12-14 03:09:49.861262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:34.801 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.060 03:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.060 03:09:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.060 [2024-12-14 03:09:50.079610] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.060 Malloc1 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.060 [2024-12-14 03:09:50.138829] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.060 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.061 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=340812 00:28:35.061 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:35.061 03:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:37.588 03:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:37.588 03:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.588 03:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.588 03:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.588 03:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:37.588 "tick_rate": 2100000000, 00:28:37.589 "poll_groups": [ 00:28:37.589 { 00:28:37.589 "name": "nvmf_tgt_poll_group_000", 00:28:37.589 "admin_qpairs": 1, 00:28:37.589 "io_qpairs": 3, 00:28:37.589 "current_admin_qpairs": 1, 00:28:37.589 "current_io_qpairs": 3, 00:28:37.589 "pending_bdev_io": 0, 00:28:37.589 "completed_nvme_io": 29950, 00:28:37.589 "transports": [ 00:28:37.589 { 00:28:37.589 "trtype": "TCP" 00:28:37.589 } 00:28:37.589 ] 00:28:37.589 }, 00:28:37.589 { 00:28:37.589 "name": "nvmf_tgt_poll_group_001", 00:28:37.589 "admin_qpairs": 0, 00:28:37.589 "io_qpairs": 1, 00:28:37.589 "current_admin_qpairs": 0, 00:28:37.589 "current_io_qpairs": 1, 00:28:37.589 "pending_bdev_io": 0, 00:28:37.589 "completed_nvme_io": 25934, 00:28:37.589 "transports": [ 00:28:37.589 { 00:28:37.589 "trtype": "TCP" 00:28:37.589 } 00:28:37.589 ] 00:28:37.589 }, 00:28:37.589 { 00:28:37.589 "name": "nvmf_tgt_poll_group_002", 00:28:37.589 "admin_qpairs": 0, 00:28:37.589 "io_qpairs": 0, 00:28:37.589 "current_admin_qpairs": 0, 00:28:37.589 "current_io_qpairs": 0, 00:28:37.589 "pending_bdev_io": 0, 00:28:37.589 "completed_nvme_io": 0, 00:28:37.589 "transports": [ 00:28:37.589 { 00:28:37.589 "trtype": "TCP" 00:28:37.589 } 00:28:37.589 ] 00:28:37.589 }, 00:28:37.589 { 00:28:37.589 "name": "nvmf_tgt_poll_group_003", 00:28:37.589 "admin_qpairs": 0, 00:28:37.589 "io_qpairs": 0, 00:28:37.589 "current_admin_qpairs": 0, 00:28:37.589 "current_io_qpairs": 0, 00:28:37.589 "pending_bdev_io": 0, 00:28:37.589 "completed_nvme_io": 0, 00:28:37.589 "transports": [ 00:28:37.589 { 00:28:37.589 "trtype": "TCP" 00:28:37.589 } 00:28:37.589 ] 00:28:37.589 } 00:28:37.589 ] 00:28:37.589 }' 00:28:37.589 03:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:37.589 03:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:37.589 03:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:37.589 03:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:37.589 03:09:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 340812 00:28:45.700 Initializing NVMe Controllers 00:28:45.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:45.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:45.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:45.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:45.700 Initialization complete. Launching workers. 
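The nvmf_get_stats check above (perf_adq.sh@107-109) counts poll groups that currently have no I/O qpairs: with ADQ steering the initiator's connections are expected to be packed onto a subset of the target's poll groups, leaving at least two of the four idle. In this run the count is 2, so the [[ 2 -lt 2 ]] guard does not trip. The same check against a running target, as a sketch that assumes the standard scripts/rpc.py client from this checkout (the in-test rpc_cmd helper is not expanded in the trace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Count poll groups with zero active I/O qpairs (idle groups).
idle=$("$rpc" nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)

# perf_adq.sh@109 only reacts when fewer than 2 of the 4 groups are idle;
# exiting non-zero here is a sketch choice, the traced run simply continues.
if [[ "$idle" -lt 2 ]]; then
    echo "ADQ steering check failed: only $idle idle poll groups" >&2
    exit 1
fi
echo "ADQ steering OK: $idle of 4 poll groups idle"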
00:28:45.700 ======================================================== 00:28:45.700 Latency(us) 00:28:45.700 Device Information : IOPS MiB/s Average min max 00:28:45.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5639.20 22.03 11369.92 1197.52 58586.84 00:28:45.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5002.03 19.54 12796.10 1749.02 58783.75 00:28:45.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5230.32 20.43 12264.44 1108.97 56215.58 00:28:45.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13695.43 53.50 4682.53 1561.49 46576.85 00:28:45.700 ======================================================== 00:28:45.700 Total : 29566.98 115.50 8671.84 1108.97 58783.75 00:28:45.700 00:28:45.700 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:45.700 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:45.700 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:45.700 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:45.700 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:45.700 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:45.700 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:45.700 rmmod nvme_tcp 00:28:45.700 rmmod nvme_fabrics 00:28:45.700 rmmod nvme_keyring 00:28:45.700 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 340776 ']' 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 340776 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 340776 ']' 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 340776 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 340776 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 340776' 00:28:45.701 killing process with pid 340776 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 340776 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 340776 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:45.701 03:10:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.701 03:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:48.990 00:28:48.990 real 0m52.431s 00:28:48.990 user 2m44.231s 00:28:48.990 sys 0m11.192s 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.990 ************************************ 00:28:48.990 END TEST nvmf_perf_adq 00:28:48.990 ************************************ 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:48.990 ************************************ 00:28:48.990 START TEST nvmf_shutdown 00:28:48.990 ************************************ 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:48.990 * Looking for test storage... 
00:28:48.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:48.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.990 --rc genhtml_branch_coverage=1 00:28:48.990 --rc genhtml_function_coverage=1 00:28:48.990 --rc genhtml_legend=1 00:28:48.990 --rc geninfo_all_blocks=1 00:28:48.990 --rc geninfo_unexecuted_blocks=1 00:28:48.990 00:28:48.990 ' 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:48.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.990 --rc genhtml_branch_coverage=1 00:28:48.990 --rc genhtml_function_coverage=1 00:28:48.990 --rc genhtml_legend=1 00:28:48.990 --rc geninfo_all_blocks=1 00:28:48.990 --rc geninfo_unexecuted_blocks=1 00:28:48.990 00:28:48.990 ' 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:48.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.990 --rc genhtml_branch_coverage=1 00:28:48.990 --rc genhtml_function_coverage=1 00:28:48.990 --rc genhtml_legend=1 00:28:48.990 --rc geninfo_all_blocks=1 00:28:48.990 --rc geninfo_unexecuted_blocks=1 00:28:48.990 00:28:48.990 ' 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:48.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.990 --rc genhtml_branch_coverage=1 00:28:48.990 --rc genhtml_function_coverage=1 00:28:48.990 --rc genhtml_legend=1 00:28:48.990 --rc geninfo_all_blocks=1 00:28:48.990 --rc geninfo_unexecuted_blocks=1 00:28:48.990 00:28:48.990 ' 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.990 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.991 03:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:48.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:48.991 03:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:48.991 ************************************ 00:28:48.991 START TEST nvmf_shutdown_tc1 00:28:48.991 ************************************ 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:48.991 03:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.562 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:55.562 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:55.562 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:55.562 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:55.562 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:55.562 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:55.562 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:55.562 03:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:55.562 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:55.562 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:55.562 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:55.562 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:55.562 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:55.562 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:55.562 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:55.563 03:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:55.563 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:55.563 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:55.563 Found net devices under 0000:af:00.0: cvl_0_0 00:28:55.563 03:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:55.563 Found net devices under 0000:af:00.1: cvl_0_1 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:55.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:55.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:28:55.563 00:28:55.563 --- 10.0.0.2 ping statistics --- 00:28:55.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.563 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:28:55.563 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:55.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:28:55.564 00:28:55.564 --- 10.0.0.1 ping statistics --- 00:28:55.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.564 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=343278 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 343278 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 343278 ']' 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.564 03:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.564 [2024-12-14 03:10:10.016864] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:55.564 [2024-12-14 03:10:10.016910] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.564 [2024-12-14 03:10:10.083095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:55.564 [2024-12-14 03:10:10.106824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.564 [2024-12-14 03:10:10.106862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.564 [2024-12-14 03:10:10.106871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.564 [2024-12-14 03:10:10.106877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.564 [2024-12-14 03:10:10.106882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.564 [2024-12-14 03:10:10.108355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.564 [2024-12-14 03:10:10.108461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.564 [2024-12-14 03:10:10.108570] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.564 [2024-12-14 03:10:10.108571] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.564 [2024-12-14 03:10:10.252062] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:55.564 03:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.564 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.564 Malloc1 
00:28:55.564 [2024-12-14 03:10:10.368386] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.564 Malloc2 00:28:55.564 Malloc3 00:28:55.564 Malloc4 00:28:55.564 Malloc5 00:28:55.564 Malloc6 00:28:55.564 Malloc7 00:28:55.564 Malloc8 00:28:55.824 Malloc9 00:28:55.824 Malloc10 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=343340 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 343340 /var/tmp/bdevperf.sock 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 343340 ']' 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:55.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
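The bdev_svc instance started above reads its bdev configuration from /dev/fd/63, fed by gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10; the expansion traced next builds one bdev_nvme_attach_controller entry per subsystem. With this run's variables substituted (TEST_TRANSPORT=tcp, first target IP 10.0.0.2, port 4420, digest options defaulting to false), a single resolved entry for subsystem 1 would look roughly like this sketch:

config=()  # as initialized at nvmf/common.sh@560 in the trace that follows

# Entry for subsystem 1 with this run's values filled in; the real helper loops
# over all ten subsystem numbers and substitutes $subsystem into each field.
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")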
00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.824 { 00:28:55.824 "params": { 00:28:55.824 "name": "Nvme$subsystem", 00:28:55.824 "trtype": "$TEST_TRANSPORT", 00:28:55.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.824 "adrfam": "ipv4", 00:28:55.824 "trsvcid": "$NVMF_PORT", 00:28:55.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.824 "hdgst": ${hdgst:-false}, 00:28:55.824 "ddgst": ${ddgst:-false} 00:28:55.824 }, 00:28:55.824 "method": "bdev_nvme_attach_controller" 00:28:55.824 } 00:28:55.824 EOF 00:28:55.824 )") 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.824 { 00:28:55.824 "params": { 00:28:55.824 "name": "Nvme$subsystem", 00:28:55.824 "trtype": "$TEST_TRANSPORT", 00:28:55.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.824 "adrfam": "ipv4", 00:28:55.824 "trsvcid": "$NVMF_PORT", 00:28:55.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.824 "hdgst": ${hdgst:-false}, 00:28:55.824 "ddgst": ${ddgst:-false} 00:28:55.824 }, 00:28:55.824 "method": "bdev_nvme_attach_controller" 00:28:55.824 } 00:28:55.824 EOF 00:28:55.824 )") 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.824 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.824 { 00:28:55.824 "params": { 00:28:55.824 "name": "Nvme$subsystem", 00:28:55.824 "trtype": "$TEST_TRANSPORT", 00:28:55.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.824 "adrfam": "ipv4", 00:28:55.824 "trsvcid": "$NVMF_PORT", 00:28:55.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.824 "hdgst": ${hdgst:-false}, 00:28:55.824 "ddgst": ${ddgst:-false} 00:28:55.825 }, 00:28:55.825 "method": "bdev_nvme_attach_controller" 00:28:55.825 } 00:28:55.825 EOF 00:28:55.825 )") 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:28:55.825 { 00:28:55.825 "params": { 00:28:55.825 "name": "Nvme$subsystem", 00:28:55.825 "trtype": "$TEST_TRANSPORT", 00:28:55.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.825 "adrfam": "ipv4", 00:28:55.825 "trsvcid": "$NVMF_PORT", 00:28:55.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.825 "hdgst": ${hdgst:-false}, 00:28:55.825 "ddgst": ${ddgst:-false} 00:28:55.825 }, 00:28:55.825 "method": "bdev_nvme_attach_controller" 00:28:55.825 } 00:28:55.825 EOF 00:28:55.825 )") 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.825 { 00:28:55.825 "params": { 00:28:55.825 "name": "Nvme$subsystem", 00:28:55.825 "trtype": "$TEST_TRANSPORT", 00:28:55.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.825 "adrfam": "ipv4", 00:28:55.825 "trsvcid": "$NVMF_PORT", 00:28:55.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.825 "hdgst": ${hdgst:-false}, 00:28:55.825 "ddgst": ${ddgst:-false} 00:28:55.825 }, 00:28:55.825 "method": "bdev_nvme_attach_controller" 00:28:55.825 } 00:28:55.825 EOF 00:28:55.825 )") 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.825 { 00:28:55.825 "params": { 00:28:55.825 "name": "Nvme$subsystem", 00:28:55.825 "trtype": "$TEST_TRANSPORT", 00:28:55.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.825 "adrfam": "ipv4", 00:28:55.825 "trsvcid": "$NVMF_PORT", 00:28:55.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.825 "hdgst": ${hdgst:-false}, 00:28:55.825 "ddgst": ${ddgst:-false} 00:28:55.825 }, 00:28:55.825 "method": "bdev_nvme_attach_controller" 00:28:55.825 } 00:28:55.825 EOF 00:28:55.825 )") 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.825 [2024-12-14 03:10:10.842589] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:55.825 [2024-12-14 03:10:10.842633] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.825 { 00:28:55.825 "params": { 00:28:55.825 "name": "Nvme$subsystem", 00:28:55.825 "trtype": "$TEST_TRANSPORT", 00:28:55.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.825 "adrfam": "ipv4", 00:28:55.825 "trsvcid": "$NVMF_PORT", 00:28:55.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.825 "hdgst": ${hdgst:-false}, 00:28:55.825 "ddgst": ${ddgst:-false} 00:28:55.825 }, 00:28:55.825 "method": "bdev_nvme_attach_controller" 00:28:55.825 } 00:28:55.825 EOF 00:28:55.825 )") 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.825 { 00:28:55.825 "params": { 00:28:55.825 "name": "Nvme$subsystem", 00:28:55.825 "trtype": "$TEST_TRANSPORT", 00:28:55.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.825 "adrfam": "ipv4", 00:28:55.825 "trsvcid": "$NVMF_PORT", 00:28:55.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.825 "hdgst": ${hdgst:-false}, 00:28:55.825 "ddgst": ${ddgst:-false} 00:28:55.825 }, 00:28:55.825 "method": "bdev_nvme_attach_controller" 00:28:55.825 } 00:28:55.825 EOF 00:28:55.825 )") 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.825 { 00:28:55.825 "params": { 00:28:55.825 "name": "Nvme$subsystem", 00:28:55.825 "trtype": "$TEST_TRANSPORT", 00:28:55.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.825 "adrfam": "ipv4", 00:28:55.825 "trsvcid": "$NVMF_PORT", 00:28:55.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.825 "hdgst": ${hdgst:-false}, 00:28:55.825 "ddgst": ${ddgst:-false} 00:28:55.825 }, 00:28:55.825 "method": "bdev_nvme_attach_controller" 00:28:55.825 } 00:28:55.825 EOF 00:28:55.825 )") 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.825 { 00:28:55.825 "params": { 00:28:55.825 "name": "Nvme$subsystem", 00:28:55.825 "trtype": "$TEST_TRANSPORT", 00:28:55.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.825 "adrfam": "ipv4", 
00:28:55.825 "trsvcid": "$NVMF_PORT", 00:28:55.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.825 "hdgst": ${hdgst:-false}, 00:28:55.825 "ddgst": ${ddgst:-false} 00:28:55.825 }, 00:28:55.825 "method": "bdev_nvme_attach_controller" 00:28:55.825 } 00:28:55.825 EOF 00:28:55.825 )") 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:55.825 03:10:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:55.825 "params": { 00:28:55.825 "name": "Nvme1", 00:28:55.825 "trtype": "tcp", 00:28:55.825 "traddr": "10.0.0.2", 00:28:55.825 "adrfam": "ipv4", 00:28:55.825 "trsvcid": "4420", 00:28:55.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.825 "hdgst": false, 00:28:55.825 "ddgst": false 00:28:55.825 }, 00:28:55.825 "method": "bdev_nvme_attach_controller" 00:28:55.825 },{ 00:28:55.825 "params": { 00:28:55.825 "name": "Nvme2", 00:28:55.825 "trtype": "tcp", 00:28:55.825 "traddr": "10.0.0.2", 00:28:55.825 "adrfam": "ipv4", 00:28:55.825 "trsvcid": "4420", 00:28:55.825 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:55.825 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:55.825 "hdgst": false, 00:28:55.825 "ddgst": false 00:28:55.825 }, 00:28:55.825 "method": "bdev_nvme_attach_controller" 00:28:55.825 },{ 00:28:55.825 "params": { 00:28:55.825 "name": "Nvme3", 00:28:55.825 "trtype": "tcp", 00:28:55.825 "traddr": "10.0.0.2", 00:28:55.825 "adrfam": "ipv4", 00:28:55.825 "trsvcid": "4420", 00:28:55.825 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:55.825 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:55.825 "hdgst": false, 00:28:55.825 "ddgst": false 00:28:55.826 }, 00:28:55.826 "method": "bdev_nvme_attach_controller" 00:28:55.826 },{ 00:28:55.826 "params": { 00:28:55.826 "name": "Nvme4", 00:28:55.826 "trtype": "tcp", 00:28:55.826 "traddr": "10.0.0.2", 00:28:55.826 "adrfam": "ipv4", 00:28:55.826 "trsvcid": "4420", 00:28:55.826 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:55.826 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:55.826 "hdgst": false, 00:28:55.826 "ddgst": false 00:28:55.826 }, 00:28:55.826 "method": "bdev_nvme_attach_controller" 00:28:55.826 },{ 00:28:55.826 "params": { 00:28:55.826 "name": "Nvme5", 00:28:55.826 "trtype": "tcp", 00:28:55.826 "traddr": "10.0.0.2", 00:28:55.826 "adrfam": "ipv4", 00:28:55.826 "trsvcid": "4420", 00:28:55.826 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:55.826 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:55.826 "hdgst": false, 00:28:55.826 "ddgst": false 00:28:55.826 }, 00:28:55.826 "method": "bdev_nvme_attach_controller" 00:28:55.826 },{ 00:28:55.826 "params": { 00:28:55.826 "name": "Nvme6", 00:28:55.826 "trtype": "tcp", 00:28:55.826 "traddr": "10.0.0.2", 00:28:55.826 "adrfam": "ipv4", 00:28:55.826 "trsvcid": "4420", 00:28:55.826 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:55.826 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:55.826 "hdgst": false, 00:28:55.826 "ddgst": false 00:28:55.826 }, 00:28:55.826 "method": "bdev_nvme_attach_controller" 00:28:55.826 },{ 00:28:55.826 "params": { 00:28:55.826 "name": "Nvme7", 00:28:55.826 "trtype": "tcp", 00:28:55.826 "traddr": "10.0.0.2", 00:28:55.826 
"adrfam": "ipv4", 00:28:55.826 "trsvcid": "4420", 00:28:55.826 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:55.826 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:55.826 "hdgst": false, 00:28:55.826 "ddgst": false 00:28:55.826 }, 00:28:55.826 "method": "bdev_nvme_attach_controller" 00:28:55.826 },{ 00:28:55.826 "params": { 00:28:55.826 "name": "Nvme8", 00:28:55.826 "trtype": "tcp", 00:28:55.826 "traddr": "10.0.0.2", 00:28:55.826 "adrfam": "ipv4", 00:28:55.826 "trsvcid": "4420", 00:28:55.826 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:55.826 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:55.826 "hdgst": false, 00:28:55.826 "ddgst": false 00:28:55.826 }, 00:28:55.826 "method": "bdev_nvme_attach_controller" 00:28:55.826 },{ 00:28:55.826 "params": { 00:28:55.826 "name": "Nvme9", 00:28:55.826 "trtype": "tcp", 00:28:55.826 "traddr": "10.0.0.2", 00:28:55.826 "adrfam": "ipv4", 00:28:55.826 "trsvcid": "4420", 00:28:55.826 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:55.826 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:55.826 "hdgst": false, 00:28:55.826 "ddgst": false 00:28:55.826 }, 00:28:55.826 "method": "bdev_nvme_attach_controller" 00:28:55.826 },{ 00:28:55.826 "params": { 00:28:55.826 "name": "Nvme10", 00:28:55.826 "trtype": "tcp", 00:28:55.826 "traddr": "10.0.0.2", 00:28:55.826 "adrfam": "ipv4", 00:28:55.826 "trsvcid": "4420", 00:28:55.826 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:55.826 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:55.826 "hdgst": false, 00:28:55.826 "ddgst": false 00:28:55.826 }, 00:28:55.826 "method": "bdev_nvme_attach_controller" 00:28:55.826 }' 00:28:55.826 [2024-12-14 03:10:10.917641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.826 [2024-12-14 03:10:10.939611] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.730 03:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.730 03:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:57.730 03:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:57.730 03:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.730 03:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:57.730 03:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.730 03:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 343340 00:28:57.730 03:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:57.730 03:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:58.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 343340 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 343278 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.668 { 00:28:58.668 "params": { 00:28:58.668 "name": "Nvme$subsystem", 00:28:58.668 "trtype": "$TEST_TRANSPORT", 00:28:58.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.668 "adrfam": "ipv4", 00:28:58.668 "trsvcid": "$NVMF_PORT", 00:28:58.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.668 "hdgst": ${hdgst:-false}, 00:28:58.668 "ddgst": ${ddgst:-false} 00:28:58.668 }, 00:28:58.668 "method": "bdev_nvme_attach_controller" 00:28:58.668 } 00:28:58.668 EOF 00:28:58.668 )") 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.668 { 00:28:58.668 "params": { 00:28:58.668 "name": "Nvme$subsystem", 00:28:58.668 "trtype": "$TEST_TRANSPORT", 00:28:58.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.668 "adrfam": "ipv4", 00:28:58.668 "trsvcid": "$NVMF_PORT", 00:28:58.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.668 "hdgst": ${hdgst:-false}, 00:28:58.668 "ddgst": ${ddgst:-false} 00:28:58.668 }, 00:28:58.668 "method": "bdev_nvme_attach_controller" 00:28:58.668 } 00:28:58.668 EOF 00:28:58.668 )") 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.668 { 00:28:58.668 "params": { 00:28:58.668 "name": "Nvme$subsystem", 00:28:58.668 "trtype": "$TEST_TRANSPORT", 00:28:58.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.668 "adrfam": "ipv4", 00:28:58.668 "trsvcid": "$NVMF_PORT", 00:28:58.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.668 "hdgst": ${hdgst:-false}, 00:28:58.668 "ddgst": ${ddgst:-false} 00:28:58.668 }, 00:28:58.668 "method": "bdev_nvme_attach_controller" 00:28:58.668 } 00:28:58.668 EOF 00:28:58.668 )") 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.668 { 00:28:58.668 "params": { 00:28:58.668 "name": "Nvme$subsystem", 00:28:58.668 "trtype": "$TEST_TRANSPORT", 00:28:58.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.668 "adrfam": "ipv4", 00:28:58.668 "trsvcid": "$NVMF_PORT", 00:28:58.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.668 "hdgst": ${hdgst:-false}, 00:28:58.668 "ddgst": ${ddgst:-false} 00:28:58.668 }, 00:28:58.668 "method": "bdev_nvme_attach_controller" 00:28:58.668 } 00:28:58.668 EOF 00:28:58.668 )") 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.668 { 00:28:58.668 "params": { 00:28:58.668 "name": "Nvme$subsystem", 00:28:58.668 "trtype": "$TEST_TRANSPORT", 00:28:58.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.668 "adrfam": "ipv4", 00:28:58.668 "trsvcid": "$NVMF_PORT", 00:28:58.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.668 "hdgst": ${hdgst:-false}, 00:28:58.668 "ddgst": ${ddgst:-false} 00:28:58.668 }, 00:28:58.668 "method": "bdev_nvme_attach_controller" 00:28:58.668 } 00:28:58.668 EOF 00:28:58.668 )") 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.668 { 00:28:58.668 "params": { 00:28:58.668 "name": "Nvme$subsystem", 00:28:58.668 "trtype": "$TEST_TRANSPORT", 00:28:58.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.668 "adrfam": "ipv4", 00:28:58.668 "trsvcid": "$NVMF_PORT", 00:28:58.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.668 "hdgst": ${hdgst:-false}, 00:28:58.668 "ddgst": ${ddgst:-false} 00:28:58.668 }, 00:28:58.668 "method": "bdev_nvme_attach_controller" 00:28:58.668 } 00:28:58.668 EOF 00:28:58.668 )") 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.668 { 00:28:58.668 "params": { 00:28:58.668 "name": "Nvme$subsystem", 00:28:58.668 "trtype": "$TEST_TRANSPORT", 00:28:58.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.668 "adrfam": "ipv4", 00:28:58.668 "trsvcid": "$NVMF_PORT", 00:28:58.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.668 "hdgst": ${hdgst:-false}, 00:28:58.668 "ddgst": ${ddgst:-false} 00:28:58.668 }, 00:28:58.668 "method": "bdev_nvme_attach_controller" 00:28:58.668 } 00:28:58.668 EOF 00:28:58.668 )") 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.668 [2024-12-14 
03:10:13.770636] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:58.668 [2024-12-14 03:10:13.770687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343403 ] 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.668 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.668 { 00:28:58.668 "params": { 00:28:58.668 "name": "Nvme$subsystem", 00:28:58.668 "trtype": "$TEST_TRANSPORT", 00:28:58.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.668 "adrfam": "ipv4", 00:28:58.668 "trsvcid": "$NVMF_PORT", 00:28:58.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.668 "hdgst": ${hdgst:-false}, 00:28:58.668 "ddgst": ${ddgst:-false} 00:28:58.668 }, 00:28:58.668 "method": "bdev_nvme_attach_controller" 00:28:58.668 } 00:28:58.669 EOF 00:28:58.669 )") 00:28:58.669 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.669 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.669 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.669 { 00:28:58.669 "params": { 00:28:58.669 "name": "Nvme$subsystem", 00:28:58.669 "trtype": "$TEST_TRANSPORT", 00:28:58.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.669 "adrfam": "ipv4", 00:28:58.669 "trsvcid": "$NVMF_PORT", 00:28:58.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.669 "hdgst": ${hdgst:-false}, 00:28:58.669 "ddgst": ${ddgst:-false} 00:28:58.669 }, 00:28:58.669 "method": "bdev_nvme_attach_controller" 00:28:58.669 } 00:28:58.669 EOF 00:28:58.669 )") 00:28:58.669 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.669 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.669 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.669 { 00:28:58.669 "params": { 00:28:58.669 "name": "Nvme$subsystem", 00:28:58.669 "trtype": "$TEST_TRANSPORT", 00:28:58.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.669 "adrfam": "ipv4", 00:28:58.669 "trsvcid": "$NVMF_PORT", 00:28:58.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.669 "hdgst": ${hdgst:-false}, 00:28:58.669 "ddgst": ${ddgst:-false} 00:28:58.669 }, 00:28:58.669 "method": "bdev_nvme_attach_controller" 00:28:58.669 } 00:28:58.669 EOF 00:28:58.669 )") 00:28:58.669 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.669 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
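The repeated config+=("$(cat <<-EOF ... )") fragments traced above come from gen_nvmf_target_json in nvmf/common.sh: one bdev_nvme_attach_controller parameter block per requested subsystem index, joined with IFS=, and pretty-printed by jq. The sketch below condenses that pattern; the per-subsystem fragment mirrors the trace, while the outer "subsystems"/"bdev" wrapper is a simplified assumption.

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller block per index, in the shape traced above.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas and pretty-print the result (simplified wrapper).
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

In this log the generated JSON is consumed twice: by bdev_svc via /dev/fd/63 earlier, and here by bdevperf via /dev/fd/62 together with -q 64 -o 65536 -w verify -t 1, whose fully expanded Nvme1..Nvme10 parameters are printed just below.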
00:28:58.669 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:58.669 03:10:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:58.669 "params": { 00:28:58.669 "name": "Nvme1", 00:28:58.669 "trtype": "tcp", 00:28:58.669 "traddr": "10.0.0.2", 00:28:58.669 "adrfam": "ipv4", 00:28:58.669 "trsvcid": "4420", 00:28:58.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:58.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:58.669 "hdgst": false, 00:28:58.669 "ddgst": false 00:28:58.669 }, 00:28:58.669 "method": "bdev_nvme_attach_controller" 00:28:58.669 },{ 00:28:58.669 "params": { 00:28:58.669 "name": "Nvme2", 00:28:58.669 "trtype": "tcp", 00:28:58.669 "traddr": "10.0.0.2", 00:28:58.669 "adrfam": "ipv4", 00:28:58.669 "trsvcid": "4420", 00:28:58.669 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:58.669 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:58.669 "hdgst": false, 00:28:58.669 "ddgst": false 00:28:58.669 }, 00:28:58.669 "method": "bdev_nvme_attach_controller" 00:28:58.669 },{ 00:28:58.669 "params": { 00:28:58.669 "name": "Nvme3", 00:28:58.669 "trtype": "tcp", 00:28:58.669 "traddr": "10.0.0.2", 00:28:58.669 "adrfam": "ipv4", 00:28:58.669 "trsvcid": "4420", 00:28:58.669 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:58.669 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:58.669 "hdgst": false, 00:28:58.669 "ddgst": false 00:28:58.669 }, 00:28:58.669 "method": "bdev_nvme_attach_controller" 00:28:58.669 },{ 00:28:58.669 "params": { 00:28:58.669 "name": "Nvme4", 00:28:58.669 "trtype": "tcp", 00:28:58.669 "traddr": "10.0.0.2", 00:28:58.669 "adrfam": "ipv4", 00:28:58.669 "trsvcid": "4420", 00:28:58.669 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:58.669 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:58.669 "hdgst": false, 00:28:58.669 "ddgst": false 00:28:58.669 }, 00:28:58.669 "method": "bdev_nvme_attach_controller" 00:28:58.669 },{ 00:28:58.669 "params": { 00:28:58.669 "name": "Nvme5", 00:28:58.669 "trtype": "tcp", 00:28:58.669 "traddr": "10.0.0.2", 00:28:58.669 "adrfam": "ipv4", 00:28:58.669 "trsvcid": "4420", 00:28:58.669 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:58.669 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:58.669 "hdgst": false, 00:28:58.669 "ddgst": false 00:28:58.669 }, 00:28:58.669 "method": "bdev_nvme_attach_controller" 00:28:58.669 },{ 00:28:58.669 "params": { 00:28:58.669 "name": "Nvme6", 00:28:58.669 "trtype": "tcp", 00:28:58.669 "traddr": "10.0.0.2", 00:28:58.669 "adrfam": "ipv4", 00:28:58.669 "trsvcid": "4420", 00:28:58.669 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:58.669 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:58.669 "hdgst": false, 00:28:58.669 "ddgst": false 00:28:58.669 }, 00:28:58.669 "method": "bdev_nvme_attach_controller" 00:28:58.669 },{ 00:28:58.669 "params": { 00:28:58.669 "name": "Nvme7", 00:28:58.669 "trtype": "tcp", 00:28:58.669 "traddr": "10.0.0.2", 00:28:58.669 "adrfam": "ipv4", 00:28:58.669 "trsvcid": "4420", 00:28:58.669 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:58.669 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:58.669 "hdgst": false, 00:28:58.669 "ddgst": false 00:28:58.669 }, 00:28:58.669 "method": "bdev_nvme_attach_controller" 00:28:58.669 },{ 00:28:58.669 "params": { 00:28:58.669 "name": "Nvme8", 00:28:58.669 "trtype": "tcp", 00:28:58.669 "traddr": "10.0.0.2", 00:28:58.669 "adrfam": "ipv4", 00:28:58.669 "trsvcid": "4420", 00:28:58.669 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:58.669 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:58.669 "hdgst": false, 00:28:58.669 "ddgst": false 00:28:58.669 }, 00:28:58.669 "method": "bdev_nvme_attach_controller" 00:28:58.669 },{ 00:28:58.669 "params": { 00:28:58.669 "name": "Nvme9", 00:28:58.669 "trtype": "tcp", 00:28:58.669 "traddr": "10.0.0.2", 00:28:58.669 "adrfam": "ipv4", 00:28:58.669 "trsvcid": "4420", 00:28:58.669 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:58.669 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:58.669 "hdgst": false, 00:28:58.669 "ddgst": false 00:28:58.669 }, 00:28:58.669 "method": "bdev_nvme_attach_controller" 00:28:58.669 },{ 00:28:58.669 "params": { 00:28:58.669 "name": "Nvme10", 00:28:58.669 "trtype": "tcp", 00:28:58.669 "traddr": "10.0.0.2", 00:28:58.669 "adrfam": "ipv4", 00:28:58.669 "trsvcid": "4420", 00:28:58.669 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:58.669 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:58.669 "hdgst": false, 00:28:58.669 "ddgst": false 00:28:58.669 }, 00:28:58.669 "method": "bdev_nvme_attach_controller" 00:28:58.669 }' 00:28:58.928 [2024-12-14 03:10:13.849179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.928 [2024-12-14 03:10:13.871420] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.305 Running I/O for 1 seconds... 00:29:01.500 2259.00 IOPS, 141.19 MiB/s 00:29:01.500 Latency(us) 00:29:01.500 [2024-12-14T02:10:16.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.500 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.500 Verification LBA range: start 0x0 length 0x400 00:29:01.500 Nvme1n1 : 1.10 294.68 18.42 0.00 0.00 214701.48 2777.48 200727.41 00:29:01.500 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.500 Verification LBA range: start 0x0 length 0x400 00:29:01.500 Nvme2n1 : 1.04 245.73 15.36 0.00 0.00 253967.12 18225.25 223696.21 00:29:01.500 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.500 Verification LBA range: start 0x0 length 0x400 00:29:01.500 Nvme3n1 : 1.09 296.58 18.54 0.00 0.00 207049.26 4587.52 193736.90 00:29:01.500 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.500 Verification LBA range: start 0x0 length 0x400 00:29:01.500 Nvme4n1 : 1.14 281.90 17.62 0.00 0.00 215576.87 13606.52 216705.71 00:29:01.500 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.500 Verification LBA range: start 0x0 length 0x400 00:29:01.500 Nvme5n1 : 1.10 233.69 14.61 0.00 0.00 255749.61 18974.23 227690.79 00:29:01.500 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.501 Verification LBA range: start 0x0 length 0x400 00:29:01.501 Nvme6n1 : 1.12 284.73 17.80 0.00 0.00 207182.51 16976.94 222697.57 00:29:01.501 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.501 Verification LBA range: start 0x0 length 0x400 00:29:01.501 Nvme7n1 : 1.13 282.93 17.68 0.00 0.00 205543.42 15541.39 213709.78 00:29:01.501 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.501 Verification LBA range: start 0x0 length 0x400 00:29:01.501 Nvme8n1 : 1.16 330.45 20.65 0.00 0.00 173765.24 11297.16 210713.84 00:29:01.501 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.501 Verification LBA range: start 0x0 length 0x400 00:29:01.501 Nvme9n1 : 1.15 280.77 17.55 0.00 0.00 201228.75 15541.39 213709.78 00:29:01.501 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:29:01.501 Verification LBA range: start 0x0 length 0x400 00:29:01.501 Nvme10n1 : 1.17 328.47 20.53 0.00 0.00 169487.93 3900.95 229688.08 00:29:01.501 [2024-12-14T02:10:16.634Z] =================================================================================================================== 00:29:01.501 [2024-12-14T02:10:16.634Z] Total : 2859.94 178.75 0.00 0.00 207112.13 2777.48 229688.08 00:29:01.501 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:01.501 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:01.501 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:01.501 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:01.760 rmmod nvme_tcp 00:29:01.760 rmmod nvme_fabrics 00:29:01.760 rmmod nvme_keyring 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 343278 ']' 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 343278 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 343278 ']' 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 343278 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343278 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 343278' 00:29:01.760 killing process with pid 343278 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 343278 00:29:01.760 03:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 343278 00:29:02.020 03:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:02.020 03:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:02.020 03:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:02.020 03:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:02.020 03:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:02.020 03:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:02.020 03:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:02.020 03:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.020 03:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:02.020 03:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.020 03:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.020 03:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:04.557 00:29:04.557 real 0m15.114s 00:29:04.557 user 0m33.774s 00:29:04.557 sys 0m5.698s 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.557 ************************************ 00:29:04.557 END TEST nvmf_shutdown_tc1 00:29:04.557 ************************************ 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:04.557 ************************************ 00:29:04.557 START TEST nvmf_shutdown_tc2 00:29:04.557 ************************************ 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:04.557 03:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.557 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:04.558 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:04.558 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:04.558 Found net devices under 0000:af:00.0: cvl_0_0 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.558 03:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:04.558 Found net devices under 0000:af:00.1: cvl_0_1 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:04.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:29:04.558 00:29:04.558 --- 10.0.0.2 ping statistics --- 00:29:04.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.558 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:04.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:29:04.558 00:29:04.558 --- 10.0.0.1 ping statistics --- 00:29:04.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.558 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.558 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:04.559 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:04.559 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:04.559 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:04.559 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:04.559 03:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.559 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=343590 00:29:04.559 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 343590 00:29:04.559 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:04.559 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 343590 ']' 00:29:04.559 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.559 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.559 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.559 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.559 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.559 [2024-12-14 03:10:19.605617] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:04.559 [2024-12-14 03:10:19.605667] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.559 [2024-12-14 03:10:19.685790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.819 [2024-12-14 03:10:19.708557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.819 [2024-12-14 03:10:19.708594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.819 [2024-12-14 03:10:19.708600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.819 [2024-12-14 03:10:19.708607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.819 [2024-12-14 03:10:19.708612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
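The nvmf_tcp_init trace above reduces to a small amount of iproute2/iptables plumbing before the target comes up: the target-side port (cvl_0_0 on this rig) is moved into a private namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened on the initiator-facing interface, connectivity is ping-checked in both directions, and nvmf_tgt is then launched inside the namespace. A minimal stand-alone sketch of the same bring-up follows; the interface names, addresses and binary path are the ones from this run and are assumptions on any other machine.

# create the target namespace and move the target-side interface into it
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# address the initiator side (host) and the target side (namespace)
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# bring the links up on both sides
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# sanity-check both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# start the target inside the namespace: cores 1-4 (mask 0x1E), all trace groups (0xFFFF)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &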
00:29:04.819 [2024-12-14 03:10:19.709939] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.819 [2024-12-14 03:10:19.710052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:04.819 [2024-12-14 03:10:19.710157] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.819 [2024-12-14 03:10:19.710159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.819 [2024-12-14 03:10:19.842290] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.819 03:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.819 Malloc1 00:29:05.078 [2024-12-14 03:10:19.962053] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.078 Malloc2 00:29:05.078 Malloc3 00:29:05.078 Malloc4 00:29:05.078 Malloc5 00:29:05.078 Malloc6 00:29:05.078 Malloc7 00:29:05.338 Malloc8 00:29:05.338 Malloc9 00:29:05.338 Malloc10 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=343646 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 343646 /var/tmp/bdevperf.sock 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 343646 ']' 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:05.338 03:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:05.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.338 { 00:29:05.338 "params": { 00:29:05.338 "name": "Nvme$subsystem", 00:29:05.338 "trtype": "$TEST_TRANSPORT", 00:29:05.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.338 "adrfam": "ipv4", 00:29:05.338 "trsvcid": "$NVMF_PORT", 00:29:05.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.338 "hdgst": ${hdgst:-false}, 00:29:05.338 "ddgst": ${ddgst:-false} 00:29:05.338 }, 00:29:05.338 "method": "bdev_nvme_attach_controller" 00:29:05.338 } 00:29:05.338 EOF 00:29:05.338 )") 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.338 { 00:29:05.338 "params": { 00:29:05.338 "name": "Nvme$subsystem", 00:29:05.338 "trtype": "$TEST_TRANSPORT", 00:29:05.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.338 "adrfam": "ipv4", 00:29:05.338 "trsvcid": "$NVMF_PORT", 00:29:05.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.338 "hdgst": ${hdgst:-false}, 00:29:05.338 "ddgst": ${ddgst:-false} 00:29:05.338 }, 00:29:05.338 "method": "bdev_nvme_attach_controller" 00:29:05.338 } 00:29:05.338 EOF 00:29:05.338 )") 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.338 { 00:29:05.338 "params": { 00:29:05.338 
"name": "Nvme$subsystem", 00:29:05.338 "trtype": "$TEST_TRANSPORT", 00:29:05.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.338 "adrfam": "ipv4", 00:29:05.338 "trsvcid": "$NVMF_PORT", 00:29:05.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.338 "hdgst": ${hdgst:-false}, 00:29:05.338 "ddgst": ${ddgst:-false} 00:29:05.338 }, 00:29:05.338 "method": "bdev_nvme_attach_controller" 00:29:05.338 } 00:29:05.338 EOF 00:29:05.338 )") 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.338 { 00:29:05.338 "params": { 00:29:05.338 "name": "Nvme$subsystem", 00:29:05.338 "trtype": "$TEST_TRANSPORT", 00:29:05.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.338 "adrfam": "ipv4", 00:29:05.338 "trsvcid": "$NVMF_PORT", 00:29:05.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.338 "hdgst": ${hdgst:-false}, 00:29:05.338 "ddgst": ${ddgst:-false} 00:29:05.338 }, 00:29:05.338 "method": "bdev_nvme_attach_controller" 00:29:05.338 } 00:29:05.338 EOF 00:29:05.338 )") 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.338 { 00:29:05.338 "params": { 00:29:05.338 "name": "Nvme$subsystem", 00:29:05.338 "trtype": "$TEST_TRANSPORT", 00:29:05.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.338 "adrfam": "ipv4", 00:29:05.338 "trsvcid": "$NVMF_PORT", 00:29:05.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.338 "hdgst": ${hdgst:-false}, 00:29:05.338 "ddgst": ${ddgst:-false} 00:29:05.338 }, 00:29:05.338 "method": "bdev_nvme_attach_controller" 00:29:05.338 } 00:29:05.338 EOF 00:29:05.338 )") 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.338 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.338 { 00:29:05.338 "params": { 00:29:05.338 "name": "Nvme$subsystem", 00:29:05.338 "trtype": "$TEST_TRANSPORT", 00:29:05.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.339 "adrfam": "ipv4", 00:29:05.339 "trsvcid": "$NVMF_PORT", 00:29:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.339 "hdgst": ${hdgst:-false}, 00:29:05.339 "ddgst": ${ddgst:-false} 00:29:05.339 }, 00:29:05.339 "method": "bdev_nvme_attach_controller" 00:29:05.339 } 00:29:05.339 EOF 00:29:05.339 )") 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.339 { 00:29:05.339 "params": { 00:29:05.339 "name": "Nvme$subsystem", 00:29:05.339 "trtype": "$TEST_TRANSPORT", 00:29:05.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.339 "adrfam": "ipv4", 00:29:05.339 "trsvcid": "$NVMF_PORT", 00:29:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.339 "hdgst": ${hdgst:-false}, 00:29:05.339 "ddgst": ${ddgst:-false} 00:29:05.339 }, 00:29:05.339 "method": "bdev_nvme_attach_controller" 00:29:05.339 } 00:29:05.339 EOF 00:29:05.339 )") 00:29:05.339 [2024-12-14 03:10:20.432424] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:05.339 [2024-12-14 03:10:20.432473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343646 ] 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.339 { 00:29:05.339 "params": { 00:29:05.339 "name": "Nvme$subsystem", 00:29:05.339 "trtype": "$TEST_TRANSPORT", 00:29:05.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.339 "adrfam": "ipv4", 00:29:05.339 "trsvcid": "$NVMF_PORT", 00:29:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.339 "hdgst": ${hdgst:-false}, 00:29:05.339 "ddgst": ${ddgst:-false} 00:29:05.339 }, 00:29:05.339 "method": "bdev_nvme_attach_controller" 00:29:05.339 } 00:29:05.339 EOF 00:29:05.339 )") 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.339 { 00:29:05.339 "params": { 00:29:05.339 "name": "Nvme$subsystem", 00:29:05.339 "trtype": "$TEST_TRANSPORT", 00:29:05.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.339 "adrfam": "ipv4", 00:29:05.339 "trsvcid": "$NVMF_PORT", 00:29:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.339 "hdgst": ${hdgst:-false}, 00:29:05.339 "ddgst": ${ddgst:-false} 00:29:05.339 }, 00:29:05.339 "method": "bdev_nvme_attach_controller" 00:29:05.339 } 00:29:05.339 EOF 00:29:05.339 )") 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.339 { 00:29:05.339 "params": { 00:29:05.339 "name": "Nvme$subsystem", 00:29:05.339 "trtype": "$TEST_TRANSPORT", 00:29:05.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.339 "adrfam": 
"ipv4", 00:29:05.339 "trsvcid": "$NVMF_PORT", 00:29:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.339 "hdgst": ${hdgst:-false}, 00:29:05.339 "ddgst": ${ddgst:-false} 00:29:05.339 }, 00:29:05.339 "method": "bdev_nvme_attach_controller" 00:29:05.339 } 00:29:05.339 EOF 00:29:05.339 )") 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:05.339 03:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:05.339 "params": { 00:29:05.339 "name": "Nvme1", 00:29:05.339 "trtype": "tcp", 00:29:05.339 "traddr": "10.0.0.2", 00:29:05.339 "adrfam": "ipv4", 00:29:05.339 "trsvcid": "4420", 00:29:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:05.339 "hdgst": false, 00:29:05.339 "ddgst": false 00:29:05.339 }, 00:29:05.339 "method": "bdev_nvme_attach_controller" 00:29:05.339 },{ 00:29:05.339 "params": { 00:29:05.339 "name": "Nvme2", 00:29:05.339 "trtype": "tcp", 00:29:05.339 "traddr": "10.0.0.2", 00:29:05.339 "adrfam": "ipv4", 00:29:05.339 "trsvcid": "4420", 00:29:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:05.339 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:05.339 "hdgst": false, 00:29:05.339 "ddgst": false 00:29:05.339 }, 00:29:05.339 "method": "bdev_nvme_attach_controller" 00:29:05.339 },{ 00:29:05.339 "params": { 00:29:05.339 "name": "Nvme3", 00:29:05.339 "trtype": "tcp", 00:29:05.339 "traddr": "10.0.0.2", 00:29:05.339 "adrfam": "ipv4", 00:29:05.339 "trsvcid": "4420", 00:29:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:05.339 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:05.339 "hdgst": false, 00:29:05.339 "ddgst": false 00:29:05.339 }, 00:29:05.339 "method": "bdev_nvme_attach_controller" 00:29:05.339 },{ 00:29:05.339 "params": { 00:29:05.339 "name": "Nvme4", 00:29:05.339 "trtype": "tcp", 00:29:05.339 "traddr": "10.0.0.2", 00:29:05.339 "adrfam": "ipv4", 00:29:05.339 "trsvcid": "4420", 00:29:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:05.339 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:05.339 "hdgst": false, 00:29:05.339 "ddgst": false 00:29:05.339 }, 00:29:05.339 "method": "bdev_nvme_attach_controller" 00:29:05.339 },{ 00:29:05.339 "params": { 00:29:05.339 "name": "Nvme5", 00:29:05.339 "trtype": "tcp", 00:29:05.339 "traddr": "10.0.0.2", 00:29:05.339 "adrfam": "ipv4", 00:29:05.339 "trsvcid": "4420", 00:29:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:05.339 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:05.339 "hdgst": false, 00:29:05.339 "ddgst": false 00:29:05.339 }, 00:29:05.339 "method": "bdev_nvme_attach_controller" 00:29:05.339 },{ 00:29:05.339 "params": { 00:29:05.339 "name": "Nvme6", 00:29:05.339 "trtype": "tcp", 00:29:05.339 "traddr": "10.0.0.2", 00:29:05.339 "adrfam": "ipv4", 00:29:05.339 "trsvcid": "4420", 00:29:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:05.339 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:05.339 "hdgst": false, 00:29:05.339 "ddgst": false 00:29:05.339 }, 00:29:05.339 "method": "bdev_nvme_attach_controller" 00:29:05.339 },{ 00:29:05.339 "params": { 00:29:05.339 "name": "Nvme7", 00:29:05.339 "trtype": "tcp", 00:29:05.339 "traddr": "10.0.0.2", 00:29:05.339 
"adrfam": "ipv4", 00:29:05.339 "trsvcid": "4420", 00:29:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:05.339 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:05.339 "hdgst": false, 00:29:05.339 "ddgst": false 00:29:05.339 }, 00:29:05.339 "method": "bdev_nvme_attach_controller" 00:29:05.339 },{ 00:29:05.339 "params": { 00:29:05.339 "name": "Nvme8", 00:29:05.339 "trtype": "tcp", 00:29:05.339 "traddr": "10.0.0.2", 00:29:05.339 "adrfam": "ipv4", 00:29:05.339 "trsvcid": "4420", 00:29:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:05.339 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:05.339 "hdgst": false, 00:29:05.339 "ddgst": false 00:29:05.339 }, 00:29:05.340 "method": "bdev_nvme_attach_controller" 00:29:05.340 },{ 00:29:05.340 "params": { 00:29:05.340 "name": "Nvme9", 00:29:05.340 "trtype": "tcp", 00:29:05.340 "traddr": "10.0.0.2", 00:29:05.340 "adrfam": "ipv4", 00:29:05.340 "trsvcid": "4420", 00:29:05.340 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:05.340 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:05.340 "hdgst": false, 00:29:05.340 "ddgst": false 00:29:05.340 }, 00:29:05.340 "method": "bdev_nvme_attach_controller" 00:29:05.340 },{ 00:29:05.340 "params": { 00:29:05.340 "name": "Nvme10", 00:29:05.340 "trtype": "tcp", 00:29:05.340 "traddr": "10.0.0.2", 00:29:05.340 "adrfam": "ipv4", 00:29:05.340 "trsvcid": "4420", 00:29:05.340 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:05.340 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:05.340 "hdgst": false, 00:29:05.340 "ddgst": false 00:29:05.340 }, 00:29:05.340 "method": "bdev_nvme_attach_controller" 00:29:05.340 }' 00:29:05.598 [2024-12-14 03:10:20.508155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.598 [2024-12-14 03:10:20.530199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.972 Running I/O for 10 seconds... 
00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.231 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.489 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.489 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=15 00:29:07.489 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 15 -ge 100 ']' 00:29:07.489 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.748 03:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 343646 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 343646 ']' 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 343646 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343646 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 343646' 00:29:07.748 killing process with pid 343646 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 343646 00:29:07.748 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 343646 00:29:07.748 Received shutdown signal, test time was about 0.733777 seconds 00:29:07.748 00:29:07.748 Latency(us) 00:29:07.748 [2024-12-14T02:10:22.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.748 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.748 Verification LBA range: start 0x0 length 0x400 00:29:07.748 Nvme1n1 : 0.71 271.29 16.96 0.00 0.00 232591.03 21221.18 184749.10 00:29:07.748 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.748 Verification LBA range: start 0x0 length 0x400 00:29:07.748 Nvme2n1 : 0.72 265.92 16.62 0.00 0.00 232083.91 18474.91 214708.42 00:29:07.748 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.748 Verification LBA range: start 0x0 length 0x400 00:29:07.748 Nvme3n1 : 0.71 277.38 17.34 0.00 0.00 216074.35 4556.31 206719.27 00:29:07.748 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.748 Verification LBA range: start 0x0 length 0x400 00:29:07.748 Nvme4n1 : 0.73 351.94 22.00 0.00 0.00 166764.25 11921.31 212711.13 
00:29:07.748 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.748 Verification LBA range: start 0x0 length 0x400 00:29:07.748 Nvme5n1 : 0.72 267.30 16.71 0.00 0.00 215431.72 16852.11 222697.57 00:29:07.748 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.748 Verification LBA range: start 0x0 length 0x400 00:29:07.748 Nvme6n1 : 0.71 269.15 16.82 0.00 0.00 208364.17 16727.28 196732.83 00:29:07.748 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.748 Verification LBA range: start 0x0 length 0x400 00:29:07.748 Nvme7n1 : 0.70 281.84 17.61 0.00 0.00 191282.03 3994.58 200727.41 00:29:07.748 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.748 Verification LBA range: start 0x0 length 0x400 00:29:07.748 Nvme8n1 : 0.70 273.80 17.11 0.00 0.00 193235.95 17351.44 208716.56 00:29:07.748 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.748 Verification LBA range: start 0x0 length 0x400 00:29:07.748 Nvme9n1 : 0.73 264.64 16.54 0.00 0.00 197184.04 23592.96 220700.28 00:29:07.748 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.748 Verification LBA range: start 0x0 length 0x400 00:29:07.748 Nvme10n1 : 0.73 261.89 16.37 0.00 0.00 193855.23 15728.64 237677.23 00:29:07.748 [2024-12-14T02:10:22.881Z] =================================================================================================================== 00:29:07.748 [2024-12-14T02:10:22.881Z] Total : 2785.17 174.07 0.00 0.00 203464.44 3994.58 237677.23 00:29:08.007 03:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:08.942 03:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 343590 00:29:08.942 03:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:08.942 03:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:08.942 03:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:08.942 03:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:08.942 03:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:08.942 03:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:08.942 03:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:08.942 03:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:08.942 03:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:08.942 03:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:08.942 03:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:08.942 rmmod nvme_tcp 00:29:08.942 rmmod nvme_fabrics 00:29:08.942 rmmod nvme_keyring 00:29:08.942 03:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:08.942 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:08.942 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:08.942 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 343590 ']' 00:29:08.942 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 343590 00:29:08.942 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 343590 ']' 00:29:08.942 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 343590 00:29:08.942 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:08.942 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:08.942 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343590 00:29:09.200 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:09.201 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:09.201 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 343590' 00:29:09.201 killing process with pid 343590 00:29:09.201 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 343590 00:29:09.201 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 343590 00:29:09.460 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:09.460 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:09.460 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:09.460 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:09.460 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:09.460 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:09.460 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:09.460 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:09.460 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:09.460 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.460 03:10:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.460 03:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:11.994 00:29:11.994 real 0m7.301s 00:29:11.994 user 0m21.434s 00:29:11.994 sys 0m1.309s 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.994 ************************************ 00:29:11.994 END TEST nvmf_shutdown_tc2 00:29:11.994 ************************************ 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:11.994 ************************************ 00:29:11.994 START TEST nvmf_shutdown_tc3 00:29:11.994 ************************************ 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.994 03:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.994 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.995 03:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:11.995 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:11.995 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.995 03:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:11.995 Found net devices under 0000:af:00.0: cvl_0_0 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:11.995 Found net devices under 0000:af:00.1: cvl_0_1 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:11.995 03:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:11.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:29:11.995 00:29:11.995 --- 10.0.0.2 ping statistics --- 00:29:11.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.995 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:11.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:29:11.995 00:29:11.995 --- 10.0.0.1 ping statistics --- 00:29:11.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.995 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=343860 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 343860 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 343860 ']' 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.995 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.996 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:11.996 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.996 03:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:11.996 [2024-12-14 03:10:26.992615] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:11.996 [2024-12-14 03:10:26.992657] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.996 [2024-12-14 03:10:27.053605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.996 [2024-12-14 03:10:27.076371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.996 [2024-12-14 03:10:27.076406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.996 [2024-12-14 03:10:27.076414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.996 [2024-12-14 03:10:27.076420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.996 [2024-12-14 03:10:27.076427] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.996 [2024-12-14 03:10:27.077674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.996 [2024-12-14 03:10:27.077780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.996 [2024-12-14 03:10:27.077889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.996 [2024-12-14 03:10:27.077890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.255 [2024-12-14 03:10:27.221440] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:12.255 03:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.255 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.255 Malloc1 
00:29:12.255 [2024-12-14 03:10:27.334957] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.255 Malloc2 00:29:12.514 Malloc3 00:29:12.514 Malloc4 00:29:12.514 Malloc5 00:29:12.514 Malloc6 00:29:12.514 Malloc7 00:29:12.514 Malloc8 00:29:12.774 Malloc9 00:29:12.774 Malloc10 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=343916 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 343916 /var/tmp/bdevperf.sock 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 343916 ']' 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:12.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.774 { 00:29:12.774 "params": { 00:29:12.774 "name": "Nvme$subsystem", 00:29:12.774 "trtype": "$TEST_TRANSPORT", 00:29:12.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.774 "adrfam": "ipv4", 00:29:12.774 "trsvcid": "$NVMF_PORT", 00:29:12.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.774 "hdgst": ${hdgst:-false}, 00:29:12.774 "ddgst": ${ddgst:-false} 00:29:12.774 }, 00:29:12.774 "method": "bdev_nvme_attach_controller" 00:29:12.774 } 00:29:12.774 EOF 00:29:12.774 )") 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.774 { 00:29:12.774 "params": { 00:29:12.774 "name": "Nvme$subsystem", 00:29:12.774 "trtype": "$TEST_TRANSPORT", 00:29:12.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.774 "adrfam": "ipv4", 00:29:12.774 "trsvcid": "$NVMF_PORT", 00:29:12.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.774 "hdgst": ${hdgst:-false}, 00:29:12.774 "ddgst": ${ddgst:-false} 00:29:12.774 }, 00:29:12.774 "method": "bdev_nvme_attach_controller" 00:29:12.774 } 00:29:12.774 EOF 00:29:12.774 )") 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.774 { 00:29:12.774 "params": { 00:29:12.774 "name": "Nvme$subsystem", 00:29:12.774 "trtype": "$TEST_TRANSPORT", 00:29:12.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.774 "adrfam": "ipv4", 00:29:12.774 "trsvcid": "$NVMF_PORT", 00:29:12.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.774 "hdgst": ${hdgst:-false}, 00:29:12.774 "ddgst": ${ddgst:-false} 00:29:12.774 }, 00:29:12.774 "method": "bdev_nvme_attach_controller" 00:29:12.774 } 00:29:12.774 EOF 00:29:12.774 )") 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:29:12.774 { 00:29:12.774 "params": { 00:29:12.774 "name": "Nvme$subsystem", 00:29:12.774 "trtype": "$TEST_TRANSPORT", 00:29:12.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.774 "adrfam": "ipv4", 00:29:12.774 "trsvcid": "$NVMF_PORT", 00:29:12.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.774 "hdgst": ${hdgst:-false}, 00:29:12.774 "ddgst": ${ddgst:-false} 00:29:12.774 }, 00:29:12.774 "method": "bdev_nvme_attach_controller" 00:29:12.774 } 00:29:12.774 EOF 00:29:12.774 )") 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.774 { 00:29:12.774 "params": { 00:29:12.774 "name": "Nvme$subsystem", 00:29:12.774 "trtype": "$TEST_TRANSPORT", 00:29:12.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.774 "adrfam": "ipv4", 00:29:12.774 "trsvcid": "$NVMF_PORT", 00:29:12.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.774 "hdgst": ${hdgst:-false}, 00:29:12.774 "ddgst": ${ddgst:-false} 00:29:12.774 }, 00:29:12.774 "method": "bdev_nvme_attach_controller" 00:29:12.774 } 00:29:12.774 EOF 00:29:12.774 )") 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.774 { 00:29:12.774 "params": { 00:29:12.774 "name": "Nvme$subsystem", 00:29:12.774 "trtype": "$TEST_TRANSPORT", 00:29:12.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.774 "adrfam": "ipv4", 00:29:12.774 "trsvcid": "$NVMF_PORT", 00:29:12.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.774 "hdgst": ${hdgst:-false}, 00:29:12.774 "ddgst": ${ddgst:-false} 00:29:12.774 }, 00:29:12.774 "method": "bdev_nvme_attach_controller" 00:29:12.774 } 00:29:12.774 EOF 00:29:12.774 )") 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.774 { 00:29:12.774 "params": { 00:29:12.774 "name": "Nvme$subsystem", 00:29:12.774 "trtype": "$TEST_TRANSPORT", 00:29:12.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.774 "adrfam": "ipv4", 00:29:12.774 "trsvcid": "$NVMF_PORT", 00:29:12.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.774 "hdgst": ${hdgst:-false}, 00:29:12.774 "ddgst": ${ddgst:-false} 00:29:12.774 }, 00:29:12.774 "method": "bdev_nvme_attach_controller" 00:29:12.774 } 00:29:12.774 EOF 00:29:12.774 )") 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:12.774 [2024-12-14 03:10:27.813173] Starting SPDK 
v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:12.774 [2024-12-14 03:10:27.813217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343916 ] 00:29:12.774 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.775 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.775 { 00:29:12.775 "params": { 00:29:12.775 "name": "Nvme$subsystem", 00:29:12.775 "trtype": "$TEST_TRANSPORT", 00:29:12.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.775 "adrfam": "ipv4", 00:29:12.775 "trsvcid": "$NVMF_PORT", 00:29:12.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.775 "hdgst": ${hdgst:-false}, 00:29:12.775 "ddgst": ${ddgst:-false} 00:29:12.775 }, 00:29:12.775 "method": "bdev_nvme_attach_controller" 00:29:12.775 } 00:29:12.775 EOF 00:29:12.775 )") 00:29:12.775 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:12.775 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.775 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.775 { 00:29:12.775 "params": { 00:29:12.775 "name": "Nvme$subsystem", 00:29:12.775 "trtype": "$TEST_TRANSPORT", 00:29:12.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.775 "adrfam": "ipv4", 00:29:12.775 "trsvcid": "$NVMF_PORT", 00:29:12.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.775 "hdgst": ${hdgst:-false}, 00:29:12.775 "ddgst": ${ddgst:-false} 00:29:12.775 }, 00:29:12.775 "method": "bdev_nvme_attach_controller" 00:29:12.775 } 00:29:12.775 EOF 00:29:12.775 )") 00:29:12.775 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:12.775 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.775 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.775 { 00:29:12.775 "params": { 00:29:12.775 "name": "Nvme$subsystem", 00:29:12.775 "trtype": "$TEST_TRANSPORT", 00:29:12.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.775 "adrfam": "ipv4", 00:29:12.775 "trsvcid": "$NVMF_PORT", 00:29:12.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.775 "hdgst": ${hdgst:-false}, 00:29:12.775 "ddgst": ${ddgst:-false} 00:29:12.775 }, 00:29:12.775 "method": "bdev_nvme_attach_controller" 00:29:12.775 } 00:29:12.775 EOF 00:29:12.775 )") 00:29:12.775 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:12.775 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:29:12.775 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:12.775 03:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:12.775 "params": { 00:29:12.775 "name": "Nvme1", 00:29:12.775 "trtype": "tcp", 00:29:12.775 "traddr": "10.0.0.2", 00:29:12.775 "adrfam": "ipv4", 00:29:12.775 "trsvcid": "4420", 00:29:12.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:12.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:12.775 "hdgst": false, 00:29:12.775 "ddgst": false 00:29:12.775 }, 00:29:12.775 "method": "bdev_nvme_attach_controller" 00:29:12.775 },{ 00:29:12.775 "params": { 00:29:12.775 "name": "Nvme2", 00:29:12.775 "trtype": "tcp", 00:29:12.775 "traddr": "10.0.0.2", 00:29:12.775 "adrfam": "ipv4", 00:29:12.775 "trsvcid": "4420", 00:29:12.775 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:12.775 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:12.775 "hdgst": false, 00:29:12.775 "ddgst": false 00:29:12.775 }, 00:29:12.775 "method": "bdev_nvme_attach_controller" 00:29:12.775 },{ 00:29:12.775 "params": { 00:29:12.775 "name": "Nvme3", 00:29:12.775 "trtype": "tcp", 00:29:12.775 "traddr": "10.0.0.2", 00:29:12.775 "adrfam": "ipv4", 00:29:12.775 "trsvcid": "4420", 00:29:12.775 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:12.775 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:12.775 "hdgst": false, 00:29:12.775 "ddgst": false 00:29:12.775 }, 00:29:12.775 "method": "bdev_nvme_attach_controller" 00:29:12.775 },{ 00:29:12.775 "params": { 00:29:12.775 "name": "Nvme4", 00:29:12.775 "trtype": "tcp", 00:29:12.775 "traddr": "10.0.0.2", 00:29:12.775 "adrfam": "ipv4", 00:29:12.775 "trsvcid": "4420", 00:29:12.775 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:12.775 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:12.775 "hdgst": false, 00:29:12.775 "ddgst": false 00:29:12.775 }, 00:29:12.775 "method": "bdev_nvme_attach_controller" 00:29:12.775 },{ 00:29:12.775 "params": { 00:29:12.775 "name": "Nvme5", 00:29:12.775 "trtype": "tcp", 00:29:12.775 "traddr": "10.0.0.2", 00:29:12.775 "adrfam": "ipv4", 00:29:12.775 "trsvcid": "4420", 00:29:12.775 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:12.775 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:12.775 "hdgst": false, 00:29:12.775 "ddgst": false 00:29:12.775 }, 00:29:12.775 "method": "bdev_nvme_attach_controller" 00:29:12.775 },{ 00:29:12.775 "params": { 00:29:12.775 "name": "Nvme6", 00:29:12.775 "trtype": "tcp", 00:29:12.775 "traddr": "10.0.0.2", 00:29:12.775 "adrfam": "ipv4", 00:29:12.775 "trsvcid": "4420", 00:29:12.775 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:12.775 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:12.775 "hdgst": false, 00:29:12.775 "ddgst": false 00:29:12.775 }, 00:29:12.775 "method": "bdev_nvme_attach_controller" 00:29:12.775 },{ 00:29:12.775 "params": { 00:29:12.775 "name": "Nvme7", 00:29:12.775 "trtype": "tcp", 00:29:12.775 "traddr": "10.0.0.2", 00:29:12.775 "adrfam": "ipv4", 00:29:12.775 "trsvcid": "4420", 00:29:12.775 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:12.775 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:12.775 "hdgst": false, 00:29:12.775 "ddgst": false 00:29:12.775 }, 00:29:12.775 "method": "bdev_nvme_attach_controller" 00:29:12.775 },{ 00:29:12.775 "params": { 00:29:12.775 "name": "Nvme8", 00:29:12.775 "trtype": "tcp", 00:29:12.775 "traddr": "10.0.0.2", 00:29:12.775 "adrfam": "ipv4", 00:29:12.775 "trsvcid": "4420", 00:29:12.775 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:12.775 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:12.775 "hdgst": false, 00:29:12.775 "ddgst": false 00:29:12.775 }, 00:29:12.775 "method": "bdev_nvme_attach_controller" 00:29:12.775 },{ 00:29:12.775 "params": { 00:29:12.775 "name": "Nvme9", 00:29:12.775 "trtype": "tcp", 00:29:12.775 "traddr": "10.0.0.2", 00:29:12.775 "adrfam": "ipv4", 00:29:12.775 "trsvcid": "4420", 00:29:12.775 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:12.775 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:12.775 "hdgst": false, 00:29:12.775 "ddgst": false 00:29:12.775 }, 00:29:12.775 "method": "bdev_nvme_attach_controller" 00:29:12.775 },{ 00:29:12.775 "params": { 00:29:12.775 "name": "Nvme10", 00:29:12.775 "trtype": "tcp", 00:29:12.775 "traddr": "10.0.0.2", 00:29:12.775 "adrfam": "ipv4", 00:29:12.775 "trsvcid": "4420", 00:29:12.775 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:12.775 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:12.775 "hdgst": false, 00:29:12.775 "ddgst": false 00:29:12.775 }, 00:29:12.775 "method": "bdev_nvme_attach_controller" 00:29:12.775 }' 00:29:12.775 [2024-12-14 03:10:27.875487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.775 [2024-12-14 03:10:27.897667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.678 Running I/O for 10 seconds... 00:29:14.678 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.678 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:14.678 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:14.678 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.678 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:14.678 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.678 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:14.678 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:14.679 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:14.679 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:14.679 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:14.679 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:14.679 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:14.679 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:14.679 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:14.679 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:14.679 03:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.679 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:14.679 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.679 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=65 00:29:14.679 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 65 -ge 100 ']' 00:29:14.679 03:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 343860 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 343860 ']' 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 343860 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.937 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343860 00:29:15.211 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:15.211 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:15.211 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 343860' 00:29:15.211 killing process with pid 343860 00:29:15.211 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 343860 00:29:15.211 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 343860 00:29:15.211 [2024-12-14 03:10:30.112184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f00 is same with the state(6) to be set 00:29:15.211
[identical tcp.c:1790:nvmf_tcp_qpair_set_recv_state *ERROR* lines elided: "The recv state of tqpair=... is same with the state(6) to be set" repeats for tqpair=0x707f00, 0x97c920, 0x7083f0 and 0x7088c0 between 03:10:30.112184 and 03:10:30.120234 while the target shuts down]
00:29:15.213 [2024-12-14 03:10:30.120234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120381] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.213 [2024-12-14 03:10:30.120450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 
00:29:15.214 [2024-12-14 03:10:30.120535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.120621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7088c0 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is 
same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123597] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.214 [2024-12-14 03:10:30.123647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.123653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.123659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.123665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.123671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x709c40 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 
00:29:15.215 [2024-12-14 03:10:30.124564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is 
same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.124890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70a130 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.125467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.125486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.125493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.215 [2024-12-14 03:10:30.125499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125549] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 
00:29:15.216 [2024-12-14 03:10:30.125691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is 
same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.125884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c450 is same with the state(6) to be set 00:29:15.216 [2024-12-14 03:10:30.126763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.216 [2024-12-14 03:10:30.126794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.216 [2024-12-14 03:10:30.126811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.216 [2024-12-14 03:10:30.126819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.216 [2024-12-14 03:10:30.126827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.216 [2024-12-14 03:10:30.126834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.216 [2024-12-14 03:10:30.126843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.216 [2024-12-14 03:10:30.126850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.216 [2024-12-14 03:10:30.126858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.216 [2024-12-14 03:10:30.126865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.216 [2024-12-14 03:10:30.126872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.216 [2024-12-14 03:10:30.126880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.216 [2024-12-14 03:10:30.126888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.216 [2024-12-14 03:10:30.126895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.216 [2024-12-14 03:10:30.126903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.216 [2024-12-14 03:10:30.126909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.216 [2024-12-14 03:10:30.126917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.216 [2024-12-14 03:10:30.126924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.216 [2024-12-14 03:10:30.126931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.216 [2024-12-14 03:10:30.126938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.216 [2024-12-14 03:10:30.126945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.216 [2024-12-14 03:10:30.126951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.216 [2024-12-14 03:10:30.126963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.216 [2024-12-14 03:10:30.126970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.216 [2024-12-14 03:10:30.126978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.216 [2024-12-14 03:10:30.126984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.126992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.126999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.217 [2024-12-14 03:10:30.127598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.217 [2024-12-14 03:10:30.127607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.218 [2024-12-14 03:10:30.127614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.127622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.218 [2024-12-14 03:10:30.127628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.127636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.218 [2024-12-14 03:10:30.127642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 
[2024-12-14 03:10:30.127650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.218 [2024-12-14 03:10:30.127657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.127665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.218 [2024-12-14 03:10:30.127671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.127679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.218 [2024-12-14 03:10:30.127686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.127694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.218 [2024-12-14 03:10:30.127701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.127709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.218 [2024-12-14 03:10:30.127715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.127723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.218 [2024-12-14 03:10:30.127731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.127740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.218 [2024-12-14 03:10:30.127746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.127754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.218 [2024-12-14 03:10:30.127761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.127789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.218 [2024-12-14 03:10:30.128057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 
[2024-12-14 03:10:30.128092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2980 is same with the state(6) to be set 00:29:15.218 [2024-12-14 03:10:30.128155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9f920 is same with the state(6) to be set 00:29:15.218 [2024-12-14 03:10:30.128236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94d260 is same with the state(6) to be set 00:29:15.218 [2024-12-14 03:10:30.128337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94d080 is same with the state(6) to be set 00:29:15.218 [2024-12-14 03:10:30.128420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 
03:10:30.128481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c990 is same with the state(6) to be set 00:29:15.218 [2024-12-14 03:10:30.128506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94ba10 is same with the state(6) to be set 00:29:15.218 [2024-12-14 03:10:30.128585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.218 [2024-12-14 03:10:30.128616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.218 [2024-12-14 03:10:30.128623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.128629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.219 [2024-12-14 03:10:30.128639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.128645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x956cd0 is same with the state(6) to be set 00:29:15.219 [2024-12-14 03:10:30.128667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.219 [2024-12-14 03:10:30.128675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.128682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.219 [2024-12-14 03:10:30.128689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.128697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.219 [2024-12-14 03:10:30.128704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.128710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.219 [2024-12-14 03:10:30.128718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.128724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x957140 is same with the state(6) to be set 00:29:15.219 [2024-12-14 03:10:30.128746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.219 [2024-12-14 03:10:30.128755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.128762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.219 [2024-12-14 03:10:30.128768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.128776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.219 [2024-12-14 03:10:30.128782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.128789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.219 [2024-12-14 03:10:30.128795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.128801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0d30 is same with the state(6) to be set 00:29:15.219 [2024-12-14 03:10:30.128820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.219 [2024-12-14 03:10:30.128828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.128836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.219 [2024-12-14 03:10:30.128842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.128853] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.219 [2024-12-14 03:10:30.128860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.128867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.219 [2024-12-14 03:10:30.128873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.128879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76340 is same with the state(6) to be set 00:29:15.219 [2024-12-14 03:10:30.129107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.219 [2024-12-14 03:10:30.129502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.219 [2024-12-14 03:10:30.129510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.129517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.129525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.129531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.129539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.129548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.129556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.129563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.129570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.220 [2024-12-14 03:10:30.140715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.220 [2024-12-14 03:10:30.140722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.140730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.140737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.140745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.140752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.140760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.140766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.142097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:15.221 [2024-12-14 03:10:30.142147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda0d30 (9): Bad file descriptor 00:29:15.221 [2024-12-14 03:10:30.142202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde2980 (9): Bad file descriptor 00:29:15.221 [2024-12-14 03:10:30.142229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9f920 (9): Bad file descriptor 00:29:15.221 [2024-12-14 03:10:30.142244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94d260 (9): Bad file descriptor 00:29:15.221 [2024-12-14 03:10:30.142263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94d080 (9): Bad file descriptor 00:29:15.221 [2024-12-14 03:10:30.142281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9c990 (9): Bad file descriptor 00:29:15.221 [2024-12-14 03:10:30.142295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94ba10 (9): Bad file descriptor 00:29:15.221 [2024-12-14 03:10:30.142322] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x956cd0 (9): Bad file descriptor 00:29:15.221 [2024-12-14 03:10:30.142342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x957140 (9): Bad file descriptor 00:29:15.221 [2024-12-14 03:10:30.142361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76340 (9): Bad file descriptor 00:29:15.221 [2024-12-14 03:10:30.145277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.145309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.145344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.145356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.145369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.145379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.145390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.145401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.145412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.145422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.145434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.145443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.145455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.145464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.145476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.145486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.145497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd54ee0 is same with the state(6) to be set 00:29:15.221 [2024-12-14 03:10:30.146066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] 
resetting controller 00:29:15.221 [2024-12-14 03:10:30.146234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.221 [2024-12-14 03:10:30.146256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda0d30 with addr=10.0.0.2, port=4420 00:29:15.221 [2024-12-14 03:10:30.146268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0d30 is same with the state(6) to be set 00:29:15.221 [2024-12-14 03:10:30.147819] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:15.221 [2024-12-14 03:10:30.147883] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:15.221 [2024-12-14 03:10:30.147935] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:15.221 [2024-12-14 03:10:30.147987] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:15.221 [2024-12-14 03:10:30.148003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:15.221 [2024-12-14 03:10:30.148204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.221 [2024-12-14 03:10:30.148224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76340 with addr=10.0.0.2, port=4420 00:29:15.221 [2024-12-14 03:10:30.148236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76340 is same with the state(6) to be set 00:29:15.221 [2024-12-14 03:10:30.148249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda0d30 (9): Bad file descriptor 00:29:15.221 [2024-12-14 03:10:30.148326] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:15.221 [2024-12-14 03:10:30.148380] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:15.221 [2024-12-14 03:10:30.148431] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:15.221 [2024-12-14 03:10:30.148606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.221 [2024-12-14 03:10:30.148625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94ba10 with addr=10.0.0.2, port=4420 00:29:15.221 [2024-12-14 03:10:30.148636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94ba10 is same with the state(6) to be set 00:29:15.221 [2024-12-14 03:10:30.148648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76340 (9): Bad file descriptor 00:29:15.221 [2024-12-14 03:10:30.148662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:15.221 [2024-12-14 03:10:30.148671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:15.221 [2024-12-14 03:10:30.148682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:15.221 [2024-12-14 03:10:30.148693] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:29:15.221 [2024-12-14 03:10:30.148728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.148742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.148759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.148770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.148783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.148793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.148805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.148820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.148831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.148841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.148852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.148862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.148873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.148883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.148895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.148904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.148916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.148925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.148936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.148946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 
03:10:30.148957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.148966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.148978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.148987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.148998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.149008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.221 [2024-12-14 03:10:30.149019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.221 [2024-12-14 03:10:30.149029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149177] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.222 [2024-12-14 03:10:30.149891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.222 [2024-12-14 03:10:30.149899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.149911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.149919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.149931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.149940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.149951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.149960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.149972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.149980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.149992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.150000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.150012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.150020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.150032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.150040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.150052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.150060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.150071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.150081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.150092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9acd0 is same with the state(6) to be set 00:29:15.223 [2024-12-14 03:10:30.150507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94ba10 (9): Bad file descriptor 00:29:15.223 [2024-12-14 03:10:30.150523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:15.223 [2024-12-14 03:10:30.150532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:15.223 [2024-12-14 03:10:30.150546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:15.223 [2024-12-14 03:10:30.150555] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:15.223 [2024-12-14 03:10:30.151791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:15.223 [2024-12-14 03:10:30.151826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:15.223 [2024-12-14 03:10:30.151837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:15.223 [2024-12-14 03:10:30.151848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:15.223 [2024-12-14 03:10:30.151858] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:29:15.223 [2024-12-14 03:10:30.152165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.223 [2024-12-14 03:10:30.152184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x956cd0 with addr=10.0.0.2, port=4420 00:29:15.223 [2024-12-14 03:10:30.152195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x956cd0 is same with the state(6) to be set 00:29:15.223 [2024-12-14 03:10:30.152468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x956cd0 (9): Bad file descriptor 00:29:15.223 [2024-12-14 03:10:30.152574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:15.223 [2024-12-14 03:10:30.152585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:15.223 [2024-12-14 03:10:30.152593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:15.223 [2024-12-14 03:10:30.152600] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:15.223 [2024-12-14 03:10:30.152639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 
03:10:30.152756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152917] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.152991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.152998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.223 [2024-12-14 03:10:30.153006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.223 [2024-12-14 03:10:30.153013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.224 [2024-12-14 03:10:30.153585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.224 [2024-12-14 03:10:30.153592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.153602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.153608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.153617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.153624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.153632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.153639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.153647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.153653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.153662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.153669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.153677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe82af0 is same with the state(6) to be set 00:29:15.225 [2024-12-14 03:10:30.154692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154732] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.154986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.154993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.225 [2024-12-14 03:10:30.155245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.225 [2024-12-14 03:10:30.155253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:15.226 [2024-12-14 03:10:30.155378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 
03:10:30.155534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155690] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.155705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.155712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd47d40 is same with the state(6) to be set 00:29:15.226 [2024-12-14 03:10:30.156685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.156696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.156707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.156715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.156723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.156736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.156744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.156751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.156760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.156766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.156777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.156784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.156792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.156799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.156808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.156816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.156825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.156831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.156840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.156847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.156854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.226 [2024-12-14 03:10:30.156862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.226 [2024-12-14 03:10:30.156870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.156877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.156885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.156891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.156900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.156907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.156914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.156921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.156932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.156939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.156947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.156954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.156963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.156970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.156978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.156986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.156994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.227 [2024-12-14 03:10:30.157501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.227 [2024-12-14 03:10:30.157508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.157517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.157523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.157535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.157542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.157551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.157558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.157566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.157573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.157582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.157589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.157597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.157604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.157613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.157620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.157628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.157634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.157643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.157650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.157658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.157665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.157674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.157680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.157689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.157696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.157703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd588a0 is same with the state(6) to be set 00:29:15.228 [2024-12-14 03:10:30.158670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158894] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.158988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.158995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.159002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.159011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.159018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.159027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.228 [2024-12-14 03:10:30.159033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.228 [2024-12-14 03:10:30.159042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:15.229 [2024-12-14 03:10:30.159532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 03:10:30.159668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.229 [2024-12-14 03:10:30.159677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.229 [2024-12-14 
03:10:30.159685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd59c20 is same with the state(6) to be set
00:29:15.230 [2024-12-14 03:10:30.160647 - 03:10:30.161674] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 64 outstanding READ commands on sqid:1 (cid:0-63, nsid:1, lba:16384-24448 in steps of 128, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) all completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [identical command/completion pairs condensed]
00:29:15.231 [2024-12-14 03:10:30.161682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5aef0 is same with the state(6) to be set
00:29:15.231 [2024-12-14 03:10:30.162659 - 03:10:30.163673] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: the same dump repeats for the next queue pair: 64 READ commands (sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128) all aborted with SQ DELETION (00/08) [identical command/completion pairs condensed]
00:29:15.233 [2024-12-14 03:10:30.163681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5c270 is same with the state(6) to be set
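The condensed dumps above follow one pattern: every READ still queued on a TCP qpair is completed as ABORTED - SQ DELETION once the target tears the submission queue down during shutdown. When working from a raw (un-condensed) log, a small tally is easier to read than the dump itself; the one-liner below is not part of the test suite, the log file name is a placeholder, and attribution to the most recently named tqpair is only approximate:

  # Count ABORTED - SQ DELETION completions, grouped by the last tqpair named
  # in a nvme_tcp_qpair_set_recv_state message (approximate attribution).
  awk '
    /nvme_tcp_qpair_set_recv_state/ {
      if (match($0, /tqpair=0x[0-9a-f]+/)) qp = substr($0, RSTART + 7, RLENGTH - 7)
    }
    /ABORTED - SQ DELETION/ { aborted[qp]++ }
    END { for (q in aborted) printf "%s: %d aborted completions\n", q, aborted[q] }
  ' nvmf_shutdown_tc3.log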
00:29:15.233 [2024-12-14 03:10:30.164630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:15.233 [2024-12-14 03:10:30.164650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:29:15.233 [2024-12-14 03:10:30.164663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:29:15.233 [2024-12-14 03:10:30.164674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:15.233 [2024-12-14 03:10:30.164749] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:29:15.233 [2024-12-14 03:10:30.164763] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:29:15.233 [2024-12-14 03:10:30.164829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:29:15.233 task offset: 27648 on job bdev=Nvme6n1 fails
00:29:15.233 Latency(us)
00:29:15.233 [2024-12-14T02:10:30.366Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average        min        max
00:29:15.233 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended in about the listed runtime with error)
00:29:15.233 Nvme1n1            :       0.81   158.90    9.93   79.45  0.00  265291.74   16477.62  217704.35
00:29:15.233 Nvme2n1            :       0.80   251.71   15.73   79.75  0.00  186887.53    5929.45  196732.83
00:29:15.233 Nvme3n1            :       0.81   243.94   15.25   79.25  0.00  187963.49   14293.09  221698.93
00:29:15.233 Nvme4n1            :       0.80   240.53   15.03   10.02  0.00  236447.65   28336.52  211712.49
00:29:15.233 Nvme5n1            :       0.80   241.48   15.09   80.49  0.00  180851.57   16852.11  188743.68
00:29:15.233 Nvme6n1            :       0.79   242.20   15.14   80.73  0.00  176384.49   14355.50  223696.21
00:29:15.233 Nvme7n1            :       0.81   164.29   10.27   79.06  0.00  229710.10   15666.22  225693.50
00:29:15.233 Nvme8n1            :       0.81   236.59   14.79   78.86  0.00  173430.98   14792.41  205720.62
00:29:15.233 Nvme9n1            :       0.81   157.34    9.83   78.67  0.00  226885.24   32705.58  220700.28
00:29:15.233 Nvme10n1           :       0.82   156.95    9.81   78.48  0.00  222546.08   19099.06  234681.30
00:29:15.233 [2024-12-14T02:10:30.366Z] ===================================================================================================================
00:29:15.233 [2024-12-14T02:10:30.366Z] Total              :             2093.92  130.87  724.76  0.00  204757.74    5929.45  234681.30
00:29:15.233 [2024-12-14 03:10:30.194639] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:15.233 [2024-12-14 03:10:30.194685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:15.233 [2024-12-14 03:10:30.194959 - 03:10:30.195600] posix.c:1054 / nvme_tcp.c:2288 / nvme_tcp.c: 326: *ERROR*: connect() failed (errno = 111), sock connection error, and recv-state errors reported for tqpair=0x957140, 0x94d260, 0xd9c990 and 0x94d080 (addr=10.0.0.2, port=4420) [identical error triples condensed]
00:29:15.233 [2024-12-14 03:10:30.196879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:29:15.233 [2024-12-14 03:10:30.196899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:29:15.233 [2024-12-14 03:10:30.196907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:29:15.233 [2024-12-14 03:10:30.196923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:15.233 [2024-12-14 03:10:30.197131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.233 [2024-12-14 03:10:30.197146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9f920 with addr=10.0.0.2, port=4420
00:29:15.233 [2024-12-14 03:10:30.197154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xd9f920 is same with the state(6) to be set 00:29:15.233 [2024-12-14 03:10:30.197347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.233 [2024-12-14 03:10:30.197360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde2980 with addr=10.0.0.2, port=4420 00:29:15.233 [2024-12-14 03:10:30.197368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2980 is same with the state(6) to be set 00:29:15.233 [2024-12-14 03:10:30.197382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x957140 (9): Bad file descriptor 00:29:15.233 [2024-12-14 03:10:30.197393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94d260 (9): Bad file descriptor 00:29:15.233 [2024-12-14 03:10:30.197401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9c990 (9): Bad file descriptor 00:29:15.233 [2024-12-14 03:10:30.197412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94d080 (9): Bad file descriptor 00:29:15.233 [2024-12-14 03:10:30.197445] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:29:15.233 [2024-12-14 03:10:30.197458] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:15.234 [2024-12-14 03:10:30.197467] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:15.234 [2024-12-14 03:10:30.197478] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
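For reference when reading the latency table above: each failing job is a bdevperf verify job with queue depth 64 and 64 KiB I/Os against one of the ten NVMe-oF bdevs. A run of the same shape can be launched, roughly, with the standalone bdevperf example; this is a sketch only, and the JSON config path and run time are placeholders rather than the values shutdown.sh actually uses:

  # Sketch: verify workload at queue depth 64 with 64 KiB I/Os, matching the
  # table's "workload: verify, depth: 64, IO size: 65536" parameters.
  # ./bdevperf.json is a placeholder bdev config (e.g. bdev_nvme_attach_controller entries);
  # -t 10 is a placeholder run time in seconds.
  ./build/examples/bdevperf --json ./bdevperf.json -q 64 -o 65536 -w verify -t 10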
00:29:15.234 [2024-12-14 03:10:30.197889 - 03:10:30.198386] posix.c:1054 / nvme_tcp.c:2288 / nvme_tcp.c: 326: *ERROR*: connect() failed (errno = 111), sock connection error, and recv-state errors reported for tqpair=0xda0d30, 0xd76340, 0x94ba10 and 0x956cd0 (addr=10.0.0.2, port=4420) [identical error triples condensed]
00:29:15.234 [2024-12-14 03:10:30.198397 - 03:10:30.198406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9f920 and tqpair=0xde2980 (9): Bad file descriptor
00:29:15.234 [2024-12-14 03:10:30.198418 - 03:10:30.198525] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2285: *ERROR*: for [nqn.2016-06.io.spdk:cnode1, cnode3, cnode7 and cnode8, 1]: Ctrlr is in error state; controller reinitialization failed; in failed state.; Resetting controller failed. [four-line failure sequence repeated per controller, condensed]
00:29:15.234 [2024-12-14 03:10:30.198602 - 03:10:30.198631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda0d30, 0xd76340, 0x94ba10 and 0x956cd0 (9): Bad file descriptor
00:29:15.234 [2024-12-14 03:10:30.198639 - 03:10:30.198810] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2285: *ERROR*: for [nqn.2016-06.io.spdk:cnode9, cnode10, cnode6, cnode5, cnode4 and cnode2, 1]: Ctrlr is in error state; controller reinitialization failed; in failed state.; Resetting controller failed. [same four-line failure sequence per controller, condensed]
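The cascade above ends with every remaining controller in failed state and each reset marked failed, because the target side is already gone and every reconnect attempt dies with connect() errno 111 (connection refused). When debugging a hang like this while the host application is still alive, its view of the controllers can be queried over RPC; a hedged sketch with placeholder socket paths, using RPCs available in stock SPDK:

  # Host-side view: list NVMe bdev controllers and their current state.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  # Target-side view (only if the target app is still up): list exported subsystems.
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_get_subsystems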
00:29:15.494 03:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 343916 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 343916 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 343916 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:16.431 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:16.432 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:16.432 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:16.432 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:16.432 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:16.432 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:16.432 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:16.432 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:16.432 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:16.432 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:16.432 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:16.432 rmmod nvme_tcp 00:29:16.432 
rmmod nvme_fabrics 00:29:16.690 rmmod nvme_keyring 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 343860 ']' 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 343860 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 343860 ']' 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 343860 00:29:16.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (343860) - No such process 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 343860 is not found' 00:29:16.690 Process with pid 343860 is not found 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:16.690 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:16.691 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.691 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.691 03:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.595 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:18.595 00:29:18.595 real 0m7.060s 00:29:18.595 user 0m16.102s 00:29:18.595 sys 0m1.262s 00:29:18.595 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.595 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:18.595 ************************************ 00:29:18.595 END TEST nvmf_shutdown_tc3 00:29:18.595 ************************************ 00:29:18.595 03:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:18.595 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:18.595 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:18.595 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:18.595 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.595 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:18.855 ************************************ 00:29:18.855 START TEST nvmf_shutdown_tc4 00:29:18.855 ************************************ 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:18.855 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:18.855 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.855 03:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:18.855 Found net devices under 0000:af:00.0: cvl_0_0 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.855 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:18.856 Found net devices under 0000:af:00.1: cvl_0_1 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:18.856 03:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:18.856 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.115 03:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.115 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.115 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:19.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:29:19.116 00:29:19.116 --- 10.0.0.2 ping statistics --- 00:29:19.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.116 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:19.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:29:19.116 00:29:19.116 --- 10.0.0.1 ping statistics --- 00:29:19.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.116 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=344120 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 344120 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 344120 ']' 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.116 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.116 [2024-12-14 03:10:34.126875] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:19.116 [2024-12-14 03:10:34.126916] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.116 [2024-12-14 03:10:34.204247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:19.116 [2024-12-14 03:10:34.226504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.116 [2024-12-14 03:10:34.226539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.116 [2024-12-14 03:10:34.226546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.116 [2024-12-14 03:10:34.226552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.116 [2024-12-14 03:10:34.226557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:19.116 [2024-12-14 03:10:34.228058] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.116 [2024-12-14 03:10:34.228164] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:19.116 [2024-12-14 03:10:34.228273] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.116 [2024-12-14 03:10:34.228275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.375 [2024-12-14 03:10:34.363189] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:19.375 03:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.375 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.376 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.376 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.376 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.376 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.376 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.376 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.376 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.376 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.376 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.376 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:19.376 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.376 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.376 Malloc1 
00:29:19.376 [2024-12-14 03:10:34.468880] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.376 Malloc2 00:29:19.634 Malloc3 00:29:19.634 Malloc4 00:29:19.634 Malloc5 00:29:19.634 Malloc6 00:29:19.634 Malloc7 00:29:19.634 Malloc8 00:29:19.893 Malloc9 00:29:19.893 Malloc10 00:29:19.893 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.893 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:19.893 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.893 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.893 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=344183 00:29:19.893 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:19.893 03:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:19.893 [2024-12-14 03:10:34.985755] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:25.176 03:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:25.176 03:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 344120 00:29:25.176 03:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 344120 ']' 00:29:25.176 03:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 344120 00:29:25.176 03:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:25.176 03:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.176 03:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 344120 00:29:25.176 03:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:25.176 03:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:25.176 03:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 344120' 00:29:25.176 killing process with pid 344120 00:29:25.176 03:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 344120 00:29:25.176 03:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 344120 00:29:25.176 Write completed with error (sct=0, sc=8) 
00:29:25.176 [2024-12-14 03:10:39.982704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff0c90 is same with the state(6) to be set 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 [2024-12-14 03:10:39.983101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fefe20 is same with the state(6) to be set 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 [2024-12-14 03:10:39.983127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fefe20 is same with the state(6) to be set 00:29:25.177 [2024-12-14 03:10:39.983135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fefe20 is same with the state(6) to be set 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 [2024-12-14 03:10:39.983141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fefe20 is same with the state(6) to be set 00:29:25.177 starting I/O failed: -6 00:29:25.177 [2024-12-14 03:10:39.983148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fefe20 is same with the state(6) to be set 00:29:25.177 [2024-12-14 03:10:39.983155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fefe20 is same with the state(6) to be set 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 [2024-12-14 03:10:39.983161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fefe20 is same with the state(6) to be set 00:29:25.177 [2024-12-14 03:10:39.983167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fefe20 is same with the state(6) to be set 00:29:25.177 [2024-12-14 03:10:39.983172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fefe20 is same with the state(6) to be set 00:29:25.177 [2024-12-14 03:10:39.983178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fefe20 is same with 
the state(6) to be set 00:29:25.177 [2024-12-14 03:10:39.983184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fefe20 is same with the state(6) to be set 00:29:25.177 [2024-12-14 03:10:39.983190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fefe20 is same with the state(6) to be set 00:29:25.177 [2024-12-14 03:10:39.983196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fefe20 is same with the state(6) to be set 00:29:25.177 starting I/O failed: -6 00:29:25.177 starting I/O failed: -6 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 [2024-12-14 03:10:39.983998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write 
completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 
starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 starting I/O failed: -6 00:29:25.177 Write completed with error (sct=0, sc=8) 00:29:25.177 [2024-12-14 03:10:39.985082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.177 starting I/O failed: -6 00:29:25.177 starting I/O failed: -6 00:29:25.178 starting I/O failed: -6 00:29:25.178 starting I/O failed: -6 00:29:25.178 starting I/O failed: -6 00:29:25.178 starting I/O failed: -6 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, sc=8) 00:29:25.178 starting I/O failed: -6 00:29:25.178 Write completed with error (sct=0, 
sc=8) 00:29:25.178 starting I/O failed: -6
00:29:25.178 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.178 [2024-12-14 03:10:39.987073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.178 NVMe io qpair process completion error
00:29:25.178 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.178 [2024-12-14 03:10:39.988018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.178 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.179 [2024-12-14 03:10:39.988859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.179 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.179 [2024-12-14 03:10:39.989831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:25.179 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.179 [2024-12-14 03:10:39.991346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.179 NVMe io qpair process completion error
00:29:25.179 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.180 [2024-12-14 03:10:39.992401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.180 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.180 [2024-12-14 03:10:39.993252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.180 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.180 [2024-12-14 03:10:39.994222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:25.180 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.181 [2024-12-14 03:10:39.996011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.181 NVMe io qpair process completion error
00:29:25.181 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.181 [2024-12-14 03:10:39.997124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.181 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.182 [2024-12-14 03:10:39.998029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.182 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.182 [2024-12-14 03:10:39.998995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:25.182 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.183 [2024-12-14 03:10:40.001028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.183 NVMe io qpair process completion error
00:29:25.183 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.183 [2024-12-14 03:10:40.002049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.183 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.183 [2024-12-14 03:10:40.002957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.183 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.183 [2024-12-14 03:10:40.003999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:25.184 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.184 [2024-12-14 03:10:40.007905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.184 NVMe io qpair process completion error
00:29:25.184 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.184 [2024-12-14 03:10:40.008970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.184 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.184 [2024-12-14 03:10:40.009752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.185 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [... repeated ...]
00:29:25.185 Write completed with error
(sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 [2024-12-14 03:10:40.010818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed 
with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with 
error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 [2024-12-14 03:10:40.013893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:25.185 NVMe io qpair process completion error 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 starting I/O failed: -6 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.185 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 [2024-12-14 03:10:40.014912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write 
completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 [2024-12-14 03:10:40.015838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 
Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 [2024-12-14 03:10:40.017187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 
00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.186 Write completed with error (sct=0, sc=8) 00:29:25.186 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 
00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 [2024-12-14 03:10:40.020042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.187 NVMe io qpair process completion error 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 
Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 [2024-12-14 03:10:40.021112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with 
error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 starting I/O failed: -6 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.187 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 [2024-12-14 03:10:40.022033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error 
(sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 [2024-12-14 03:10:40.023007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed 
with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 [2024-12-14 
03:10:40.024980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.188 NVMe io qpair process completion error 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 Write completed with error (sct=0, sc=8) 00:29:25.188 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 [2024-12-14 03:10:40.026054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 
Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 [2024-12-14 03:10:40.026940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 Write completed with error (sct=0, sc=8) 00:29:25.189 starting I/O failed: -6 00:29:25.189 Write completed with error (sct=0, 
sc=8)
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for the remaining queued writes ...]
00:29:25.189 [2024-12-14 03:10:40.027937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:25.190 [2024-12-14 03:10:40.034383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.190 NVMe io qpair process completion error
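The burst of "Write completed with error (sct=0, sc=8)" entries above is spdk_nvme_perf reporting every queued write as aborted once the target side drops the connection: sct=0/sc=8 corresponds to the NVMe generic status "Command Aborted due to SQ Deletion", and the -6 alongside it is -ENXIO ("No such device or address"), which the submission and completion paths report once the TCP qpair underneath is gone (the nvme_qpair.c line is the completion poller flagging exactly that). The same signature can be provoked by hand with roughly the sequence below; the I/O parameters and the use of the default RPC socket are illustrative assumptions, not the exact commands shutdown.sh runs:

    # Sketch only: tear a subsystem down while spdk_nvme_perf is writing to it.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_nvme_perf" \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode8' \
        -q 128 -o 4096 -w randwrite -t 30 &
    perf_pid=$!
    sleep 5
    # Deleting the subsystem aborts the queued writes (sct=0, sc=8) and the
    # perf completion loop then sees -ENXIO (-6) on each qpair.
    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8
    wait "$perf_pid" || echo 'spdk_nvme_perf: errors occurred (expected here)'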
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for the remaining queued writes ...]
00:29:25.190 [2024-12-14 03:10:40.035400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:25.190 [2024-12-14 03:10:40.036235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:25.191 [2024-12-14 03:10:40.037284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:25.191 [2024-12-14 03:10:40.039972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.191 NVMe io qpair process completion error
00:29:25.191 Initializing NVMe Controllers
00:29:25.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:25.191 Controller IO queue size 128, less than required.
00:29:25.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:25.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:25.191 Controller IO queue size 128, less than required.
00:29:25.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:25.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:25.192 Controller IO queue size 128, less than required.
00:29:25.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:25.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:25.192 Controller IO queue size 128, less than required.
00:29:25.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:25.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:25.192 Controller IO queue size 128, less than required.
00:29:25.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:25.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:25.192 Controller IO queue size 128, less than required.
00:29:25.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:25.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:25.192 Controller IO queue size 128, less than required.
00:29:25.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:25.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:25.192 Controller IO queue size 128, less than required.
00:29:25.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:25.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:25.192 Controller IO queue size 128, less than required.
00:29:25.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:25.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:25.192 Controller IO queue size 128, less than required.
00:29:25.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
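The per-controller "Controller IO queue size 128, less than required" message is advisory rather than an error: the run asked for more outstanding I/O per controller than the 128-entry I/O queues the target advertises, so the surplus requests simply wait inside the initiator driver. In the summary that follows, MiB/s is just IOPS times the transfer size; working backwards from the cnode1 row points at a 45056-byte (44 KiB) I/O size, which is inferred from the numbers rather than read out of the test script:

    # Cross-check one row of the table below: IOPS * io_bytes / 2^20 ~ MiB/s.
    awk 'BEGIN { iops = 2177.78; io_bytes = 45056;   # io_bytes inferred, not from the script
                 printf "%.2f MiB/s\n", iops * io_bytes / (1024 * 1024) }'   # prints ~93.58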
00:29:25.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:25.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:25.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:25.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:25.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:25.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:25.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:25.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:25.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:25.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:25.192 Initialization complete. Launching workers.
00:29:25.192 ========================================================
00:29:25.192 Latency(us)
00:29:25.192 Device Information : IOPS MiB/s Average min max
00:29:25.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2177.78 93.58 58778.68 835.90 105814.27
00:29:25.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2217.84 95.30 57726.93 887.72 124570.91
00:29:25.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2224.91 95.60 57557.94 844.76 123092.67
00:29:25.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2239.05 96.21 57211.51 750.18 102177.99
00:29:25.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2240.76 96.28 57207.51 773.72 106949.76
00:29:25.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2205.63 94.77 58154.09 867.31 110297.37
00:29:25.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2198.56 94.47 58366.65 946.86 114317.79
00:29:25.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2165.14 93.03 59285.92 718.27 117047.32
00:29:25.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2242.90 96.37 57300.61 726.19 123757.02
00:29:25.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2255.12 96.90 56313.62 478.86 95982.54
00:29:25.192 ========================================================
00:29:25.192 Total : 22167.70 952.52 57780.04 478.86 124570.91
00:29:25.192
00:29:25.192 [2024-12-14 03:10:40.042971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf8b30 is same with the state(6) to be set
00:29:25.192 [2024-12-14 03:10:40.043017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3190 is same with the state(6) to be set
00:29:25.192 [2024-12-14 03:10:40.043046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf4ff0 is same with the state(6) to be set
00:29:25.192 [2024-12-14 03:10:40.043080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3880 is same with the state(6) to be set
00:29:25.192 [2024-12-14 03:10:40.043107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3370 is same with the state(6) to be set
00:29:25.192 [2024-12-14 03:10:40.043134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xaf4cc0 is same with the state(6) to be set 00:29:25.192 [2024-12-14 03:10:40.043161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3550 is same with the state(6) to be set 00:29:25.192 [2024-12-14 03:10:40.043187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf5320 is same with the state(6) to be set 00:29:25.192 [2024-12-14 03:10:40.043214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf5650 is same with the state(6) to be set 00:29:25.192 [2024-12-14 03:10:40.043242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2fb0 is same with the state(6) to be set 00:29:25.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:25.451 03:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 344183 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 344183 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 344183 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:26.387 03:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.387 rmmod nvme_tcp 00:29:26.387 rmmod nvme_fabrics 00:29:26.387 rmmod nvme_keyring 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 344120 ']' 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 344120 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 344120 ']' 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 344120 00:29:26.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (344120) - No such process 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 344120 is not found' 00:29:26.387 Process with pid 344120 is not found 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.387 03:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.922 03:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.922 00:29:28.922 real 0m9.762s 00:29:28.922 user 0m25.026s 00:29:28.922 sys 0m5.017s 00:29:28.922 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.922 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:28.922 ************************************ 00:29:28.922 END TEST nvmf_shutdown_tc4 00:29:28.922 ************************************ 00:29:28.922 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:28.922 00:29:28.922 real 0m39.746s 00:29:28.922 user 1m36.585s 00:29:28.922 sys 0m13.580s 00:29:28.922 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.922 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:28.922 ************************************ 00:29:28.922 END TEST nvmf_shutdown 00:29:28.922 ************************************ 00:29:28.922 03:10:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:28.922 03:10:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:28.922 03:10:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.922 03:10:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:28.922 ************************************ 00:29:28.922 START TEST nvmf_nsid 00:29:28.922 ************************************ 00:29:28.922 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:28.922 * Looking for test storage... 
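At this point the harness leaves the shutdown tests behind and run_test starts nvmf_nsid, which exercises namespace-identifier handling against subsystems named nqn.2024-10.io.spdk:cnode0/1/2 and a second target socket (/var/tmp/tgt2.sock), all defined a little further down. As a rough illustration of the kind of check involved, and not necessarily how nsid.sh itself performs it, the identifiers of a namespace exported over NVMe/TCP can be read with nvme-cli once a host is connected:

    # Illustrative only; nsid.sh drives this through SPDK tooling rather than
    # necessarily through the kernel initiator shown here.  /dev/nvme0 assumes
    # this is the first NVMe controller enumerated on the host.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2024-10.io.spdk:cnode0
    nvme list-ns /dev/nvme0                               # active NSIDs on the controller
    nvme id-ns /dev/nvme0 -n 1 | grep -iE 'nguid|eui64'   # per-namespace identifiers
    nvme disconnect -n nqn.2024-10.io.spdk:cnode0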
00:29:28.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:28.922 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:28.922 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:29:28.922 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:28.922 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:28.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.923 --rc genhtml_branch_coverage=1 00:29:28.923 --rc genhtml_function_coverage=1 00:29:28.923 --rc genhtml_legend=1 00:29:28.923 --rc geninfo_all_blocks=1 00:29:28.923 --rc geninfo_unexecuted_blocks=1 00:29:28.923 00:29:28.923 ' 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:28.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.923 --rc genhtml_branch_coverage=1 00:29:28.923 --rc genhtml_function_coverage=1 00:29:28.923 --rc genhtml_legend=1 00:29:28.923 --rc geninfo_all_blocks=1 00:29:28.923 --rc geninfo_unexecuted_blocks=1 00:29:28.923 00:29:28.923 ' 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:28.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.923 --rc genhtml_branch_coverage=1 00:29:28.923 --rc genhtml_function_coverage=1 00:29:28.923 --rc genhtml_legend=1 00:29:28.923 --rc geninfo_all_blocks=1 00:29:28.923 --rc geninfo_unexecuted_blocks=1 00:29:28.923 00:29:28.923 ' 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:28.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.923 --rc genhtml_branch_coverage=1 00:29:28.923 --rc genhtml_function_coverage=1 00:29:28.923 --rc genhtml_legend=1 00:29:28.923 --rc geninfo_all_blocks=1 00:29:28.923 --rc geninfo_unexecuted_blocks=1 00:29:28.923 00:29:28.923 ' 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:28.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.923 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.924 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:28.924 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:28.924 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:28.924 03:10:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:34.195 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.195 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:34.195 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:34.195 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:34.195 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:34.195 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:34.455 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:34.455 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.455 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:34.456 Found net devices under 0000:af:00.0: cvl_0_0 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:34.456 Found net devices under 0000:af:00.1: cvl_0_1 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.456 03:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:29:34.456 00:29:34.456 --- 10.0.0.2 ping statistics --- 00:29:34.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.456 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:34.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:29:34.456 00:29:34.456 --- 10.0.0.1 ping statistics --- 00:29:34.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.456 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:34.456 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:34.715 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:34.715 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:34.715 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.715 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:34.716 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=346507 00:29:34.716 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:34.716 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 346507 00:29:34.716 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 346507 ']' 00:29:34.716 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.716 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.716 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.716 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.716 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:34.716 [2024-12-14 03:10:49.677079] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
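The nvmf_tcp_init trace above builds a self-contained NVMe/TCP test topology on one host: one E810 port (cvl_0_0) is moved into a private network namespace and carries the target address, while its sibling port (cvl_0_1) stays in the root namespace as the initiator, and both directions are verified with a single ping. A minimal sketch of that same sequence, using the interface names, addresses and iptables comment from this run; on another machine the names would differ.

# Sketch of the namespace setup traced in nvmf/common.sh (names/IPs taken from this run).
TARGET_IF=cvl_0_0          # moved into the namespace, gets the target IP
INITIATOR_IF=cvl_0_1       # stays in the root namespace, gets the initiator IP
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port and tag the rule so cleanup can strip it later
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator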
00:29:34.716 [2024-12-14 03:10:49.677121] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.716 [2024-12-14 03:10:49.752006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.716 [2024-12-14 03:10:49.773526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.716 [2024-12-14 03:10:49.773563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.716 [2024-12-14 03:10:49.773572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.716 [2024-12-14 03:10:49.773579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.716 [2024-12-14 03:10:49.773584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:34.716 [2024-12-14 03:10:49.774067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=346528 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.975 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:34.976 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.976 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:34.976 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:34.976 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
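get_main_ns_ip, traced just above, decides which address the second target should dial by mapping the transport name to the name of the variable that holds the address and then dereferencing it with bash indirect expansion. A minimal sketch of that pattern with the variable names and values from this run; the real helper lives in nvmf/common.sh.

# Map transport -> name of the variable that holds the address, then dereference it.
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_INITIATOR_IP=10.0.0.1
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
    [[ -n ${!ip} ]] || return 1            # indirect expansion: the value of that variable
    echo "${!ip}"                          # -> 10.0.0.1 in this run
}

tgt2addr=$(get_main_ns_ip)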
00:29:34.976 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:34.976 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:34.976 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=34bacb65-2558-4f41-b0f6-9a1033a56fc1 00:29:34.976 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:34.976 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=dd994413-5ed9-4b1f-8850-0dd007b194f2 00:29:34.976 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:34.976 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=1315adc2-0893-4b0d-8a01-06a6495a4f94 00:29:34.976 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:34.976 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.976 03:10:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:34.976 null0 00:29:34.976 null1 00:29:34.976 [2024-12-14 03:10:49.962605] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:34.976 [2024-12-14 03:10:49.962648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346528 ] 00:29:34.976 null2 00:29:34.976 [2024-12-14 03:10:49.968802] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.976 [2024-12-14 03:10:49.992987] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.976 03:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.976 03:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 346528 /var/tmp/tgt2.sock 00:29:34.976 03:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 346528 ']' 00:29:34.976 03:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:34.976 03:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.976 03:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:34.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
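The three uuidgen calls above seed the namespace UUIDs that the rest of the nsid test verifies: after connecting to the second target over NVMe/TCP, the trace below reads each namespace's NGUID back with nvme id-ns and checks that it equals the corresponding UUID with the dashes stripped. A condensed sketch of that check, reusing the subsystem NQN, address, port and host identity from this run.

ns1uuid=34bacb65-2558-4f41-b0f6-9a1033a56fc1   # value generated by uuidgen in this run

# Connect to the second target and wait for the namespace block device to appear.
nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
     --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
     --hostid=80b56b8f-cbc7-e911-906e-0017a4403562
until lsblk -l -o NAME | grep -q -w nvme0n1; do sleep 1; done

# NGUID reported by the controller vs. the UUID with its dashes removed.
nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
expected=$(tr -d - <<< "$ns1uuid")
[[ ${nguid^^} == "${expected^^}" ]] && echo "nsid 1 NGUID matches"

nvme disconnect -d /dev/nvme0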
00:29:34.976 03:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.976 03:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:34.976 [2024-12-14 03:10:50.035435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.976 [2024-12-14 03:10:50.060335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.235 03:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.235 03:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:35.235 03:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:35.494 [2024-12-14 03:10:50.566944] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.494 [2024-12-14 03:10:50.583043] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:35.494 nvme0n1 nvme0n2 00:29:35.494 nvme1n1 00:29:35.752 03:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:35.752 03:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:35.752 03:10:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:36.688 03:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:36.688 03:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:36.688 03:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:36.688 03:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:36.688 03:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:36.688 03:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:36.688 03:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:36.688 03:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:36.688 03:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:36.688 03:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:36.688 03:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:36.688 03:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:36.688 03:10:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:37.624 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:37.624 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:37.624 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:37.624 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:37.624 03:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:37.624 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 34bacb65-2558-4f41-b0f6-9a1033a56fc1 00:29:37.624 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:37.624 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:37.624 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:37.624 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:37.624 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=34bacb6525584f41b0f69a1033a56fc1 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 34BACB6525584F41B0F69A1033A56FC1 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 34BACB6525584F41B0F69A1033A56FC1 == \3\4\B\A\C\B\6\5\2\5\5\8\4\F\4\1\B\0\F\6\9\A\1\0\3\3\A\5\6\F\C\1 ]] 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid dd994413-5ed9-4b1f-8850-0dd007b194f2 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dd9944135ed94b1f88500dd007b194f2 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DD9944135ED94B1F88500DD007B194F2 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ DD9944135ED94B1F88500DD007B194F2 == \D\D\9\9\4\4\1\3\5\E\D\9\4\B\1\F\8\8\5\0\0\D\D\0\0\7\B\1\9\4\F\2 ]] 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:37.883 03:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 1315adc2-0893-4b0d-8a01-06a6495a4f94 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1315adc208934b0d8a0106a6495a4f94 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1315ADC208934B0D8A0106A6495A4F94 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 1315ADC208934B0D8A0106A6495A4F94 == \1\3\1\5\A\D\C\2\0\8\9\3\4\B\0\D\8\A\0\1\0\6\A\6\4\9\5\A\4\F\9\4 ]] 00:29:37.883 03:10:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:38.142 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:38.142 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:38.142 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 346528 00:29:38.142 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 346528 ']' 00:29:38.142 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 346528 00:29:38.142 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:38.142 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.142 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346528 00:29:38.142 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:38.142 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:38.142 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346528' 00:29:38.142 killing process with pid 346528 00:29:38.142 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 346528 00:29:38.142 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 346528 00:29:38.401 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:38.401 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:38.401 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:38.401 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:38.401 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:29:38.401 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:38.401 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:38.401 rmmod nvme_tcp 00:29:38.401 rmmod nvme_fabrics 00:29:38.660 rmmod nvme_keyring 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 346507 ']' 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 346507 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 346507 ']' 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 346507 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346507 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346507' 00:29:38.660 killing process with pid 346507 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 346507 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 346507 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.660 03:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.194 03:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:41.194 00:29:41.194 real 0m12.222s 00:29:41.194 user 0m9.612s 00:29:41.194 
sys 0m5.370s 00:29:41.194 03:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.194 03:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:41.194 ************************************ 00:29:41.194 END TEST nvmf_nsid 00:29:41.194 ************************************ 00:29:41.194 03:10:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:41.194 00:29:41.194 real 18m36.942s 00:29:41.194 user 49m19.614s 00:29:41.194 sys 4m38.912s 00:29:41.194 03:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.194 03:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:41.194 ************************************ 00:29:41.194 END TEST nvmf_target_extra 00:29:41.194 ************************************ 00:29:41.194 03:10:55 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:41.194 03:10:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:41.194 03:10:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.194 03:10:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:41.194 ************************************ 00:29:41.194 START TEST nvmf_host 00:29:41.194 ************************************ 00:29:41.194 03:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:41.194 * Looking for test storage... 00:29:41.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:41.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.194 --rc genhtml_branch_coverage=1 00:29:41.194 --rc genhtml_function_coverage=1 00:29:41.194 --rc genhtml_legend=1 00:29:41.194 --rc geninfo_all_blocks=1 00:29:41.194 --rc geninfo_unexecuted_blocks=1 00:29:41.194 00:29:41.194 ' 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:41.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.194 --rc genhtml_branch_coverage=1 00:29:41.194 --rc genhtml_function_coverage=1 00:29:41.194 --rc genhtml_legend=1 00:29:41.194 --rc geninfo_all_blocks=1 00:29:41.194 --rc geninfo_unexecuted_blocks=1 00:29:41.194 00:29:41.194 ' 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:41.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.194 --rc genhtml_branch_coverage=1 00:29:41.194 --rc genhtml_function_coverage=1 00:29:41.194 --rc genhtml_legend=1 00:29:41.194 --rc geninfo_all_blocks=1 00:29:41.194 --rc geninfo_unexecuted_blocks=1 00:29:41.194 00:29:41.194 ' 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:41.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.194 --rc genhtml_branch_coverage=1 00:29:41.194 --rc genhtml_function_coverage=1 00:29:41.194 --rc genhtml_legend=1 00:29:41.194 --rc geninfo_all_blocks=1 00:29:41.194 --rc geninfo_unexecuted_blocks=1 00:29:41.194 00:29:41.194 ' 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
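The nsid teardown traced a little further up mirrors its setup: the kernel NVMe/TCP modules are unloaded, every iptables rule carrying the SPDK_NVMF comment is dropped by round-tripping the ruleset through iptables-save and iptables-restore, the test namespace is removed, and the leftover initiator address is flushed. A compressed sketch of that cleanup, assuming ip netns delete is what _remove_spdk_ns amounts to here; the names and PID variable are the ones from this run.

# Undo what nvmf_tcp_init and nvmfappstart created (names taken from this run).
kill "$nvmfpid"                                    # killprocess in the trace also waits for exit
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 1; done
modprobe -v -r nvme-tcp                            # pulled nvme_tcp/nvme_fabrics/nvme_keyring out in this run
modprobe -v -r nvme-fabrics
# drop only the firewall rules this test added (they all carry the SPDK_NVMF comment)
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk                    # returns cvl_0_0 to the root namespace
ip -4 addr flush cvl_0_1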
00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.194 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:41.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.195 ************************************ 00:29:41.195 START TEST nvmf_multicontroller 00:29:41.195 ************************************ 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:41.195 * Looking for test storage... 
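Each run_test wrapper probes the installed lcov with the cmp_versions helper (traced for nvmf_host above and repeated for nvmf_multicontroller below): the two version strings are split into components and compared numerically, so 1.15 sorts below 2 and the matching LCOV options get exported. A stripped-down sketch of that comparison, assuming purely numeric, dot-separated components; the real helper in scripts/common.sh also splits on '-' and ':'.

# Compare dotted version strings numerically, one component at a time.
version_lt() {                 # usage: version_lt 1.15 2  -> true if $1 < $2
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((i = 0; i < max; i++)); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                   # equal
}

version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is older than 2"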
00:29:41.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:29:41.195 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:41.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.455 --rc genhtml_branch_coverage=1 00:29:41.455 --rc genhtml_function_coverage=1 00:29:41.455 --rc genhtml_legend=1 00:29:41.455 --rc geninfo_all_blocks=1 00:29:41.455 --rc geninfo_unexecuted_blocks=1 00:29:41.455 00:29:41.455 ' 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:41.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.455 --rc genhtml_branch_coverage=1 00:29:41.455 --rc genhtml_function_coverage=1 00:29:41.455 --rc genhtml_legend=1 00:29:41.455 --rc geninfo_all_blocks=1 00:29:41.455 --rc geninfo_unexecuted_blocks=1 00:29:41.455 00:29:41.455 ' 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:41.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.455 --rc genhtml_branch_coverage=1 00:29:41.455 --rc genhtml_function_coverage=1 00:29:41.455 --rc genhtml_legend=1 00:29:41.455 --rc geninfo_all_blocks=1 00:29:41.455 --rc geninfo_unexecuted_blocks=1 00:29:41.455 00:29:41.455 ' 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:41.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.455 --rc genhtml_branch_coverage=1 00:29:41.455 --rc genhtml_function_coverage=1 00:29:41.455 --rc genhtml_legend=1 00:29:41.455 --rc geninfo_all_blocks=1 00:29:41.455 --rc geninfo_unexecuted_blocks=1 00:29:41.455 00:29:41.455 ' 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:41.455 03:10:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.455 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:41.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:41.456 03:10:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:41.456 03:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.027 
03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:48.027 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:48.027 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.027 03:11:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:48.027 Found net devices under 0000:af:00.0: cvl_0_0 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:48.027 Found net devices under 0000:af:00.1: cvl_0_1 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
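gather_supported_nvmf_pci_devs, traced just above, whitelists the Intel E810/X722 and Mellanox device IDs and then turns each matching PCI address into kernel interface names by globbing sysfs. A short sketch of that sysfs lookup for the two E810 ports found in this run; the PCI addresses are the ones printed above.

# Resolve PCI addresses to net device names the way the trace does (sysfs glob + basename).
pci_devs=(0000:af:00.0 0000:af:00.1)   # E810 ports (0x8086:0x159b) found in this run
net_devs=()

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)      # e.g. .../net/cvl_0_0
    [[ -e ${pci_net_devs[0]} ]] || continue                # skip devices with no netdev bound
    pci_net_devs=("${pci_net_devs[@]##*/}")                # strip the path, keep the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done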
00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.027 03:11:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.027 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.027 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.027 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.027 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.027 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.027 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.027 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:48.027 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:48.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:29:48.027 00:29:48.027 --- 10.0.0.2 ping statistics --- 00:29:48.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.028 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:48.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:29:48.028 00:29:48.028 --- 10.0.0.1 ping statistics --- 00:29:48.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.028 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=348910 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 348910 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 348910 ']' 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.028 [2024-12-14 03:11:02.391356] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
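nvmf_tcp_init above builds the loopback topology used by the TCP tests: the target-side port cvl_0_0 is moved into a dedicated network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2, the initiator-side port cvl_0_1 stays in the default namespace with 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are verified with ping before nvmf_tgt is started inside the namespace. A condensed sketch of the same setup, assuming the interface names from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP connections in
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator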
00:29:48.028 [2024-12-14 03:11:02.391399] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.028 [2024-12-14 03:11:02.468074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:48.028 [2024-12-14 03:11:02.490223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.028 [2024-12-14 03:11:02.490258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.028 [2024-12-14 03:11:02.490265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.028 [2024-12-14 03:11:02.490270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.028 [2024-12-14 03:11:02.490276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.028 [2024-12-14 03:11:02.491595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.028 [2024-12-14 03:11:02.491714] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.028 [2024-12-14 03:11:02.491716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.028 [2024-12-14 03:11:02.617866] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.028 Malloc0 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.028 [2024-12-14 03:11:02.670601] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.028 [2024-12-14 03:11:02.678553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.028 Malloc1 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=348937 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 348937 /var/tmp/bdevperf.sock 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 348937 ']' 00:29:48.028 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:48.029 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.029 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:48.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
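The rpc_cmd calls above configure the freshly started target: one TCP transport, then two subsystems (cnode1 and cnode2), each backed by a 64 MiB malloc bdev and listening on ports 4420 and 4421 of 10.0.0.2, after which bdevperf is launched with -z and -r /var/tmp/bdevperf.sock so it can be driven over its own RPC socket. rpc_cmd is the autotest wrapper; roughly equivalent calls with the standalone scripts/rpc.py, reusing the arguments that appear in the trace (paths assumed relative to the SPDK repo), would be:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2 is configured the same way on top of Malloc1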
00:29:48.029 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:48.029 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.029 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.029 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.029 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:48.029 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:48.029 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.029 03:11:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.029 NVMe0n1 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.029 1 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 
-- # set +x 00:29:48.029 request: 00:29:48.029 { 00:29:48.029 "name": "NVMe0", 00:29:48.029 "trtype": "tcp", 00:29:48.029 "traddr": "10.0.0.2", 00:29:48.029 "adrfam": "ipv4", 00:29:48.029 "trsvcid": "4420", 00:29:48.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.029 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:48.029 "hostaddr": "10.0.0.1", 00:29:48.029 "prchk_reftag": false, 00:29:48.029 "prchk_guard": false, 00:29:48.029 "hdgst": false, 00:29:48.029 "ddgst": false, 00:29:48.029 "allow_unrecognized_csi": false, 00:29:48.029 "method": "bdev_nvme_attach_controller", 00:29:48.029 "req_id": 1 00:29:48.029 } 00:29:48.029 Got JSON-RPC error response 00:29:48.029 response: 00:29:48.029 { 00:29:48.029 "code": -114, 00:29:48.029 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:48.029 } 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.029 request: 00:29:48.029 { 00:29:48.029 "name": "NVMe0", 00:29:48.029 "trtype": "tcp", 00:29:48.029 "traddr": "10.0.0.2", 00:29:48.029 "adrfam": "ipv4", 00:29:48.029 "trsvcid": "4420", 00:29:48.029 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:48.029 "hostaddr": "10.0.0.1", 00:29:48.029 "prchk_reftag": false, 00:29:48.029 "prchk_guard": false, 00:29:48.029 "hdgst": false, 00:29:48.029 "ddgst": false, 00:29:48.029 "allow_unrecognized_csi": false, 00:29:48.029 "method": "bdev_nvme_attach_controller", 00:29:48.029 "req_id": 1 00:29:48.029 } 00:29:48.029 Got 
JSON-RPC error response 00:29:48.029 response: 00:29:48.029 { 00:29:48.029 "code": -114, 00:29:48.029 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:48.029 } 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.029 request: 00:29:48.029 { 00:29:48.029 "name": "NVMe0", 00:29:48.029 "trtype": "tcp", 00:29:48.029 "traddr": "10.0.0.2", 00:29:48.029 "adrfam": "ipv4", 00:29:48.029 "trsvcid": "4420", 00:29:48.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.029 "hostaddr": "10.0.0.1", 00:29:48.029 "prchk_reftag": false, 00:29:48.029 "prchk_guard": false, 00:29:48.029 "hdgst": false, 00:29:48.029 "ddgst": false, 00:29:48.029 "multipath": "disable", 00:29:48.029 "allow_unrecognized_csi": false, 00:29:48.029 "method": "bdev_nvme_attach_controller", 00:29:48.029 "req_id": 1 00:29:48.029 } 00:29:48.029 Got JSON-RPC error response 00:29:48.029 response: 00:29:48.029 { 00:29:48.029 "code": -114, 00:29:48.029 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:48.029 } 00:29:48.029 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:48.030 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:48.030 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:48.030 03:11:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:48.030 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:48.030 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:48.030 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:48.030 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:48.030 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:48.030 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.030 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:48.030 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.030 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:48.030 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.030 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.030 request: 00:29:48.030 { 00:29:48.030 "name": "NVMe0", 00:29:48.030 "trtype": "tcp", 00:29:48.030 "traddr": "10.0.0.2", 00:29:48.030 "adrfam": "ipv4", 00:29:48.030 "trsvcid": "4420", 00:29:48.030 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.030 "hostaddr": "10.0.0.1", 00:29:48.030 "prchk_reftag": false, 00:29:48.030 "prchk_guard": false, 00:29:48.030 "hdgst": false, 00:29:48.030 "ddgst": false, 00:29:48.030 "multipath": "failover", 00:29:48.030 "allow_unrecognized_csi": false, 00:29:48.030 "method": "bdev_nvme_attach_controller", 00:29:48.030 "req_id": 1 00:29:48.030 } 00:29:48.030 Got JSON-RPC error response 00:29:48.030 response: 00:29:48.030 { 00:29:48.030 "code": -114, 00:29:48.288 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:48.288 } 00:29:48.288 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:48.288 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:48.288 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:48.288 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:48.288 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:48.288 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:48.288 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.289 
03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.289 NVMe0n1 00:29:48.289 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.289 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:48.289 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.289 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.289 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.289 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:48.289 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.289 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.547 00:29:48.547 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.547 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:48.547 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:48.547 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.547 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.547 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.547 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:48.547 03:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:49.483 { 00:29:49.483 "results": [ 00:29:49.483 { 00:29:49.483 "job": "NVMe0n1", 00:29:49.483 "core_mask": "0x1", 00:29:49.483 "workload": "write", 00:29:49.483 "status": "finished", 00:29:49.483 "queue_depth": 128, 00:29:49.483 "io_size": 4096, 00:29:49.483 "runtime": 1.003838, 00:29:49.483 "iops": 25151.468663270367, 00:29:49.483 "mibps": 98.24792446589987, 00:29:49.483 "io_failed": 0, 00:29:49.483 "io_timeout": 0, 00:29:49.483 "avg_latency_us": 5079.6039447160365, 00:29:49.483 "min_latency_us": 1466.7580952380952, 00:29:49.483 "max_latency_us": 8800.548571428571 00:29:49.483 } 00:29:49.483 ], 00:29:49.483 "core_count": 1 00:29:49.483 } 00:29:49.483 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:49.483 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.483 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:49.483 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.483 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:49.483 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 348937 00:29:49.483 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 348937 ']' 00:29:49.483 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 348937 00:29:49.483 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:49.483 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.483 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 348937 00:29:49.742 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:49.742 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:49.742 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 348937' 00:29:49.742 killing process with pid 348937 00:29:49.742 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 348937 00:29:49.742 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 348937 00:29:49.742 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:49.742 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:49.743 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:49.743 [2024-12-14 03:11:02.776028] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
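The host side of the test above is driven entirely through bdevperf's private RPC socket: NVMe0 is attached to cnode1 at 10.0.0.2:4420 with host address 10.0.0.1, the repeated attach attempts under the same controller name (different subsystem, multipath disable, multipath failover) are all expected to fail with JSON-RPC error -114 as shown, a second controller is attached on the 4421 listener, and perform_tests then runs the queued 128-deep 4 KiB write workload whose results are printed above and again in try.txt below. A rough sketch of that host-side flow with the standalone tools (same arguments as in the trace, paths relative to the SPDK repo):

# attach the first path through bdevperf's RPC socket
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
# re-attaching under the name NVMe0 with conflicting parameters returns -114, as exercised above
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1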
00:29:49.743 [2024-12-14 03:11:02.776077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid348937 ] 00:29:49.743 [2024-12-14 03:11:02.848847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.743 [2024-12-14 03:11:02.871117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.743 [2024-12-14 03:11:03.433731] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 678c59fd-9231-4269-94a5-0aa85fe86097 already exists 00:29:49.743 [2024-12-14 03:11:03.433758] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:678c59fd-9231-4269-94a5-0aa85fe86097 alias for bdev NVMe1n1 00:29:49.743 [2024-12-14 03:11:03.433766] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:49.743 Running I/O for 1 seconds... 00:29:49.743 25086.00 IOPS, 97.99 MiB/s 00:29:49.743 Latency(us) 00:29:49.743 [2024-12-14T02:11:04.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.743 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:49.743 NVMe0n1 : 1.00 25151.47 98.25 0.00 0.00 5079.60 1466.76 8800.55 00:29:49.743 [2024-12-14T02:11:04.876Z] =================================================================================================================== 00:29:49.743 [2024-12-14T02:11:04.876Z] Total : 25151.47 98.25 0.00 0.00 5079.60 1466.76 8800.55 00:29:49.743 Received shutdown signal, test time was about 1.000000 seconds 00:29:49.743 00:29:49.743 Latency(us) 00:29:49.743 [2024-12-14T02:11:04.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.743 [2024-12-14T02:11:04.876Z] =================================================================================================================== 00:29:49.743 [2024-12-14T02:11:04.876Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:49.743 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:49.743 rmmod nvme_tcp 00:29:49.743 rmmod nvme_fabrics 00:29:49.743 rmmod nvme_keyring 00:29:49.743 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.002 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:50.002 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:50.002 
03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 348910 ']' 00:29:50.002 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 348910 00:29:50.002 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 348910 ']' 00:29:50.002 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 348910 00:29:50.002 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:50.002 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.002 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 348910 00:29:50.002 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:50.002 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:50.002 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 348910' 00:29:50.002 killing process with pid 348910 00:29:50.002 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 348910 00:29:50.002 03:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 348910 00:29:50.002 03:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:50.002 03:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:50.002 03:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:50.002 03:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:50.002 03:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:50.002 03:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:50.002 03:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:50.002 03:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.002 03:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.002 03:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.002 03:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.002 03:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:52.535 00:29:52.535 real 0m10.994s 00:29:52.535 user 0m11.773s 00:29:52.535 sys 0m5.052s 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:52.535 ************************************ 00:29:52.535 END TEST nvmf_multicontroller 00:29:52.535 ************************************ 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.535 ************************************ 00:29:52.535 START TEST nvmf_aer 00:29:52.535 ************************************ 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:52.535 * Looking for test storage... 00:29:52.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.535 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:52.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.536 --rc genhtml_branch_coverage=1 00:29:52.536 --rc genhtml_function_coverage=1 00:29:52.536 --rc genhtml_legend=1 00:29:52.536 --rc geninfo_all_blocks=1 00:29:52.536 --rc geninfo_unexecuted_blocks=1 00:29:52.536 00:29:52.536 ' 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:52.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.536 --rc genhtml_branch_coverage=1 00:29:52.536 --rc genhtml_function_coverage=1 00:29:52.536 --rc genhtml_legend=1 00:29:52.536 --rc geninfo_all_blocks=1 00:29:52.536 --rc geninfo_unexecuted_blocks=1 00:29:52.536 00:29:52.536 ' 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:52.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.536 --rc genhtml_branch_coverage=1 00:29:52.536 --rc genhtml_function_coverage=1 00:29:52.536 --rc genhtml_legend=1 00:29:52.536 --rc geninfo_all_blocks=1 00:29:52.536 --rc geninfo_unexecuted_blocks=1 00:29:52.536 00:29:52.536 ' 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:52.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.536 --rc genhtml_branch_coverage=1 00:29:52.536 --rc genhtml_function_coverage=1 00:29:52.536 --rc genhtml_legend=1 00:29:52.536 --rc geninfo_all_blocks=1 00:29:52.536 --rc geninfo_unexecuted_blocks=1 00:29:52.536 00:29:52.536 ' 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:52.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.536 03:11:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:59.106 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:59.106 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:59.106 Found net devices under 0000:af:00.0: cvl_0_0 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.106 03:11:12 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:59.106 Found net devices under 0000:af:00.1: cvl_0_1 00:29:59.106 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.107 03:11:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.107 
03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:29:59.107 00:29:59.107 --- 10.0.0.2 ping statistics --- 00:29:59.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.107 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:29:59.107 00:29:59.107 --- 10.0.0.1 ping statistics --- 00:29:59.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.107 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=351213 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 351213 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 351213 ']' 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.107 [2024-12-14 03:11:13.364802] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
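For reference, the target-side network plumbing traced above reduces to the following sequence. This is a condensed sketch based only on the commands visible in this trace (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses come from the detection steps above); it is not a verbatim excerpt of nvmf/common.sh:

  # move the target port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1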
00:29:59.107 [2024-12-14 03:11:13.364845] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.107 [2024-12-14 03:11:13.439893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:59.107 [2024-12-14 03:11:13.462680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.107 [2024-12-14 03:11:13.462716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.107 [2024-12-14 03:11:13.462723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.107 [2024-12-14 03:11:13.462728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.107 [2024-12-14 03:11:13.462733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.107 [2024-12-14 03:11:13.464120] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.107 [2024-12-14 03:11:13.464146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.107 [2024-12-14 03:11:13.464216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.107 [2024-12-14 03:11:13.464218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.107 [2024-12-14 03:11:13.595692] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.107 Malloc0 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.107 [2024-12-14 03:11:13.656851] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.107 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.108 [ 00:29:59.108 { 00:29:59.108 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:59.108 "subtype": "Discovery", 00:29:59.108 "listen_addresses": [], 00:29:59.108 "allow_any_host": true, 00:29:59.108 "hosts": [] 00:29:59.108 }, 00:29:59.108 { 00:29:59.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.108 "subtype": "NVMe", 00:29:59.108 "listen_addresses": [ 00:29:59.108 { 00:29:59.108 "trtype": "TCP", 00:29:59.108 "adrfam": "IPv4", 00:29:59.108 "traddr": "10.0.0.2", 00:29:59.108 "trsvcid": "4420" 00:29:59.108 } 00:29:59.108 ], 00:29:59.108 "allow_any_host": true, 00:29:59.108 "hosts": [], 00:29:59.108 "serial_number": "SPDK00000000000001", 00:29:59.108 "model_number": "SPDK bdev Controller", 00:29:59.108 "max_namespaces": 2, 00:29:59.108 "min_cntlid": 1, 00:29:59.108 "max_cntlid": 65519, 00:29:59.108 "namespaces": [ 00:29:59.108 { 00:29:59.108 "nsid": 1, 00:29:59.108 "bdev_name": "Malloc0", 00:29:59.108 "name": "Malloc0", 00:29:59.108 "nguid": "1216EC0D4841427BBA2D3B5EC1BB5F93", 00:29:59.108 "uuid": "1216ec0d-4841-427b-ba2d-3b5ec1bb5f93" 00:29:59.108 } 00:29:59.108 ] 00:29:59.108 } 00:29:59.108 ] 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=351236 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.108 Malloc1 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.108 Asynchronous Event Request test 00:29:59.108 Attaching to 10.0.0.2 00:29:59.108 Attached to 10.0.0.2 00:29:59.108 Registering asynchronous event callbacks... 00:29:59.108 Starting namespace attribute notice tests for all controllers... 00:29:59.108 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:59.108 aer_cb - Changed Namespace 00:29:59.108 Cleaning up... 
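The AER exercise above corresponds to the following command sequence against the in-namespace nvmf_tgt. This is a minimal sketch assuming rpc_cmd resolves to scripts/rpc.py on the default /var/tmp/spdk.sock; all target names, sizes, and NQNs are taken from the trace, and the subsystem listing that follows shows both namespaces attached:

  # start the target inside the namespace created earlier
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # create the TCP transport and a subsystem with one malloc-backed namespace
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # run the AER listener in the background, then hot-add a second namespace to
  # trigger the namespace-attribute-changed AEN it waits for
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2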
00:29:59.108 [ 00:29:59.108 { 00:29:59.108 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:59.108 "subtype": "Discovery", 00:29:59.108 "listen_addresses": [], 00:29:59.108 "allow_any_host": true, 00:29:59.108 "hosts": [] 00:29:59.108 }, 00:29:59.108 { 00:29:59.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.108 "subtype": "NVMe", 00:29:59.108 "listen_addresses": [ 00:29:59.108 { 00:29:59.108 "trtype": "TCP", 00:29:59.108 "adrfam": "IPv4", 00:29:59.108 "traddr": "10.0.0.2", 00:29:59.108 "trsvcid": "4420" 00:29:59.108 } 00:29:59.108 ], 00:29:59.108 "allow_any_host": true, 00:29:59.108 "hosts": [], 00:29:59.108 "serial_number": "SPDK00000000000001", 00:29:59.108 "model_number": "SPDK bdev Controller", 00:29:59.108 "max_namespaces": 2, 00:29:59.108 "min_cntlid": 1, 00:29:59.108 "max_cntlid": 65519, 00:29:59.108 "namespaces": [ 00:29:59.108 { 00:29:59.108 "nsid": 1, 00:29:59.108 "bdev_name": "Malloc0", 00:29:59.108 "name": "Malloc0", 00:29:59.108 "nguid": "1216EC0D4841427BBA2D3B5EC1BB5F93", 00:29:59.108 "uuid": "1216ec0d-4841-427b-ba2d-3b5ec1bb5f93" 00:29:59.108 }, 00:29:59.108 { 00:29:59.108 "nsid": 2, 00:29:59.108 "bdev_name": "Malloc1", 00:29:59.108 "name": "Malloc1", 00:29:59.108 "nguid": "A9AF7B017B1B4022AF5191D54E4F03D9", 00:29:59.108 "uuid": "a9af7b01-7b1b-4022-af51-91d54e4f03d9" 00:29:59.108 } 00:29:59.108 ] 00:29:59.108 } 00:29:59.108 ] 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 351236 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.108 03:11:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:59.108 rmmod 
nvme_tcp 00:29:59.108 rmmod nvme_fabrics 00:29:59.108 rmmod nvme_keyring 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 351213 ']' 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 351213 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 351213 ']' 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 351213 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351213 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351213' 00:29:59.108 killing process with pid 351213 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 351213 00:29:59.108 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 351213 00:29:59.368 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:59.368 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:59.368 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:59.368 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:59.368 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:59.368 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:59.368 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:59.368 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.368 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:59.368 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.368 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.368 03:11:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.270 03:11:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:01.270 00:30:01.270 real 0m9.090s 00:30:01.270 user 0m5.009s 00:30:01.270 sys 0m4.767s 00:30:01.270 03:11:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:01.270 03:11:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:01.270 ************************************ 00:30:01.270 END TEST nvmf_aer 00:30:01.270 ************************************ 00:30:01.270 03:11:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:01.270 03:11:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:01.270 03:11:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:01.270 03:11:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.530 ************************************ 00:30:01.530 START TEST nvmf_async_init 00:30:01.530 ************************************ 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:01.530 * Looking for test storage... 00:30:01.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:01.530 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:01.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.531 --rc genhtml_branch_coverage=1 00:30:01.531 --rc genhtml_function_coverage=1 00:30:01.531 --rc genhtml_legend=1 00:30:01.531 --rc geninfo_all_blocks=1 00:30:01.531 --rc geninfo_unexecuted_blocks=1 00:30:01.531 00:30:01.531 ' 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:01.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.531 --rc genhtml_branch_coverage=1 00:30:01.531 --rc genhtml_function_coverage=1 00:30:01.531 --rc genhtml_legend=1 00:30:01.531 --rc geninfo_all_blocks=1 00:30:01.531 --rc geninfo_unexecuted_blocks=1 00:30:01.531 00:30:01.531 ' 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:01.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.531 --rc genhtml_branch_coverage=1 00:30:01.531 --rc genhtml_function_coverage=1 00:30:01.531 --rc genhtml_legend=1 00:30:01.531 --rc geninfo_all_blocks=1 00:30:01.531 --rc geninfo_unexecuted_blocks=1 00:30:01.531 00:30:01.531 ' 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:01.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.531 --rc genhtml_branch_coverage=1 00:30:01.531 --rc genhtml_function_coverage=1 00:30:01.531 --rc genhtml_legend=1 00:30:01.531 --rc geninfo_all_blocks=1 00:30:01.531 --rc geninfo_unexecuted_blocks=1 00:30:01.531 00:30:01.531 ' 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.531 03:11:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:01.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:01.531 03:11:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7cbaf1f299ff4b6eb94e54fb22192870 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:01.531 03:11:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:08.102 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:08.102 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.102 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:08.103 Found net devices under 0000:af:00.0: cvl_0_0 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:08.103 Found net devices under 0000:af:00.1: cvl_0_1 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.103 03:11:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:08.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:30:08.103 00:30:08.103 --- 10.0.0.2 ping statistics --- 00:30:08.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.103 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:08.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:30:08.103 00:30:08.103 --- 10.0.0.1 ping statistics --- 00:30:08.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.103 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=353474 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 353474 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 353474 ']' 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.103 [2024-12-14 03:11:22.536079] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
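Note on the test-bed setup traced above: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while the peer port (cvl_0_1) stays in the root namespace as 10.0.0.1; an iptables rule opens TCP/4420 toward the initiator-side interface, connectivity is verified with a ping in each direction, and nvmf_tgt is then launched inside the namespace on a single core. A condensed sketch of that sequence, using only commands that appear in this log (interface and namespace names as reported above; the nvmf_tgt path shortened to a relative one):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &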
00:30:08.103 [2024-12-14 03:11:22.536137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.103 [2024-12-14 03:11:22.613528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.103 [2024-12-14 03:11:22.634426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.103 [2024-12-14 03:11:22.634459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.103 [2024-12-14 03:11:22.634467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.103 [2024-12-14 03:11:22.634473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.103 [2024-12-14 03:11:22.634479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.103 [2024-12-14 03:11:22.634957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.103 [2024-12-14 03:11:22.777687] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.103 null0 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.103 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7cbaf1f299ff4b6eb94e54fb22192870 00:30:08.104 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.104 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.104 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.104 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:08.104 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.104 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.104 [2024-12-14 03:11:22.821916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.104 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.104 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:08.104 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.104 03:11:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.104 nvme0n1 00:30:08.104 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.104 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:08.104 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.104 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.104 [ 00:30:08.104 { 00:30:08.104 "name": "nvme0n1", 00:30:08.104 "aliases": [ 00:30:08.104 "7cbaf1f2-99ff-4b6e-b94e-54fb22192870" 00:30:08.104 ], 00:30:08.104 "product_name": "NVMe disk", 00:30:08.104 "block_size": 512, 00:30:08.104 "num_blocks": 2097152, 00:30:08.104 "uuid": "7cbaf1f2-99ff-4b6e-b94e-54fb22192870", 00:30:08.104 "numa_id": 1, 00:30:08.104 "assigned_rate_limits": { 00:30:08.104 "rw_ios_per_sec": 0, 00:30:08.104 "rw_mbytes_per_sec": 0, 00:30:08.104 "r_mbytes_per_sec": 0, 00:30:08.104 "w_mbytes_per_sec": 0 00:30:08.104 }, 00:30:08.104 "claimed": false, 00:30:08.104 "zoned": false, 00:30:08.104 "supported_io_types": { 00:30:08.104 "read": true, 00:30:08.104 "write": true, 00:30:08.104 "unmap": false, 00:30:08.104 "flush": true, 00:30:08.104 "reset": true, 00:30:08.104 "nvme_admin": true, 00:30:08.104 "nvme_io": true, 00:30:08.104 "nvme_io_md": false, 00:30:08.104 "write_zeroes": true, 00:30:08.104 "zcopy": false, 00:30:08.104 "get_zone_info": false, 00:30:08.104 "zone_management": false, 00:30:08.104 "zone_append": false, 00:30:08.104 "compare": true, 00:30:08.104 "compare_and_write": true, 00:30:08.104 "abort": true, 00:30:08.104 "seek_hole": false, 00:30:08.104 "seek_data": false, 00:30:08.104 "copy": true, 00:30:08.104 "nvme_iov_md": false 00:30:08.104 }, 00:30:08.104 
"memory_domains": [ 00:30:08.104 { 00:30:08.104 "dma_device_id": "system", 00:30:08.104 "dma_device_type": 1 00:30:08.104 } 00:30:08.104 ], 00:30:08.104 "driver_specific": { 00:30:08.104 "nvme": [ 00:30:08.104 { 00:30:08.104 "trid": { 00:30:08.104 "trtype": "TCP", 00:30:08.104 "adrfam": "IPv4", 00:30:08.104 "traddr": "10.0.0.2", 00:30:08.104 "trsvcid": "4420", 00:30:08.104 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:08.104 }, 00:30:08.104 "ctrlr_data": { 00:30:08.104 "cntlid": 1, 00:30:08.104 "vendor_id": "0x8086", 00:30:08.104 "model_number": "SPDK bdev Controller", 00:30:08.104 "serial_number": "00000000000000000000", 00:30:08.104 "firmware_revision": "25.01", 00:30:08.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:08.104 "oacs": { 00:30:08.104 "security": 0, 00:30:08.104 "format": 0, 00:30:08.104 "firmware": 0, 00:30:08.104 "ns_manage": 0 00:30:08.104 }, 00:30:08.104 "multi_ctrlr": true, 00:30:08.104 "ana_reporting": false 00:30:08.104 }, 00:30:08.104 "vs": { 00:30:08.104 "nvme_version": "1.3" 00:30:08.104 }, 00:30:08.104 "ns_data": { 00:30:08.104 "id": 1, 00:30:08.104 "can_share": true 00:30:08.104 } 00:30:08.104 } 00:30:08.104 ], 00:30:08.104 "mp_policy": "active_passive" 00:30:08.104 } 00:30:08.104 } 00:30:08.104 ] 00:30:08.104 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.104 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:08.104 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.104 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.104 [2024-12-14 03:11:23.083401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:08.104 [2024-12-14 03:11:23.083457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2287a90 (9): Bad file descriptor 00:30:08.104 [2024-12-14 03:11:23.215386] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:30:08.104 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.104 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:08.104 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.104 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.104 [ 00:30:08.104 { 00:30:08.104 "name": "nvme0n1", 00:30:08.104 "aliases": [ 00:30:08.104 "7cbaf1f2-99ff-4b6e-b94e-54fb22192870" 00:30:08.104 ], 00:30:08.104 "product_name": "NVMe disk", 00:30:08.104 "block_size": 512, 00:30:08.104 "num_blocks": 2097152, 00:30:08.104 "uuid": "7cbaf1f2-99ff-4b6e-b94e-54fb22192870", 00:30:08.104 "numa_id": 1, 00:30:08.104 "assigned_rate_limits": { 00:30:08.104 "rw_ios_per_sec": 0, 00:30:08.104 "rw_mbytes_per_sec": 0, 00:30:08.104 "r_mbytes_per_sec": 0, 00:30:08.104 "w_mbytes_per_sec": 0 00:30:08.104 }, 00:30:08.104 "claimed": false, 00:30:08.104 "zoned": false, 00:30:08.104 "supported_io_types": { 00:30:08.104 "read": true, 00:30:08.104 "write": true, 00:30:08.104 "unmap": false, 00:30:08.104 "flush": true, 00:30:08.104 "reset": true, 00:30:08.104 "nvme_admin": true, 00:30:08.104 "nvme_io": true, 00:30:08.104 "nvme_io_md": false, 00:30:08.104 "write_zeroes": true, 00:30:08.104 "zcopy": false, 00:30:08.104 "get_zone_info": false, 00:30:08.104 "zone_management": false, 00:30:08.104 "zone_append": false, 00:30:08.104 "compare": true, 00:30:08.104 "compare_and_write": true, 00:30:08.104 "abort": true, 00:30:08.104 "seek_hole": false, 00:30:08.104 "seek_data": false, 00:30:08.104 "copy": true, 00:30:08.104 "nvme_iov_md": false 00:30:08.104 }, 00:30:08.104 "memory_domains": [ 00:30:08.104 { 00:30:08.104 "dma_device_id": "system", 00:30:08.104 "dma_device_type": 1 00:30:08.104 } 00:30:08.104 ], 00:30:08.104 "driver_specific": { 00:30:08.104 "nvme": [ 00:30:08.104 { 00:30:08.104 "trid": { 00:30:08.104 "trtype": "TCP", 00:30:08.104 "adrfam": "IPv4", 00:30:08.104 "traddr": "10.0.0.2", 00:30:08.104 "trsvcid": "4420", 00:30:08.104 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:08.104 }, 00:30:08.104 "ctrlr_data": { 00:30:08.104 "cntlid": 2, 00:30:08.104 "vendor_id": "0x8086", 00:30:08.104 "model_number": "SPDK bdev Controller", 00:30:08.104 "serial_number": "00000000000000000000", 00:30:08.104 "firmware_revision": "25.01", 00:30:08.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:08.104 "oacs": { 00:30:08.104 "security": 0, 00:30:08.363 "format": 0, 00:30:08.364 "firmware": 0, 00:30:08.364 "ns_manage": 0 00:30:08.364 }, 00:30:08.364 "multi_ctrlr": true, 00:30:08.364 "ana_reporting": false 00:30:08.364 }, 00:30:08.364 "vs": { 00:30:08.364 "nvme_version": "1.3" 00:30:08.364 }, 00:30:08.364 "ns_data": { 00:30:08.364 "id": 1, 00:30:08.364 "can_share": true 00:30:08.364 } 00:30:08.364 } 00:30:08.364 ], 00:30:08.364 "mp_policy": "active_passive" 00:30:08.364 } 00:30:08.364 } 00:30:08.364 ] 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
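Before the TLS variant below, the plaintext portion of the test drove the following target-side sequence, visible in the RPC trace above: create the TCP transport, back the subsystem with a null bdev, add the namespace with an explicit ID (which is what surfaces as the alias/uuid in the bdev dumps), expose a listener on 10.0.0.2:4420, and finally detach the host-side controller. A condensed sketch with the same RPCs (scripts/rpc.py assumed as the front-end):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py bdev_null_create null0 1024 512                  # 2097152 x 512 B blocks, as dumped above
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
      -g 7cbaf1f299ff4b6eb94e54fb22192870                           # reappears as the bdev uuid above
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0                # tear down the host-side bdev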
00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.9BFDEXj4pK 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.9BFDEXj4pK 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.9BFDEXj4pK 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.364 [2024-12-14 03:11:23.288026] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:08.364 [2024-12-14 03:11:23.288116] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.364 [2024-12-14 03:11:23.304090] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:08.364 nvme0n1 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.364 [ 00:30:08.364 { 00:30:08.364 "name": "nvme0n1", 00:30:08.364 "aliases": [ 00:30:08.364 "7cbaf1f2-99ff-4b6e-b94e-54fb22192870" 00:30:08.364 ], 00:30:08.364 "product_name": "NVMe disk", 00:30:08.364 "block_size": 512, 00:30:08.364 "num_blocks": 2097152, 00:30:08.364 "uuid": "7cbaf1f2-99ff-4b6e-b94e-54fb22192870", 00:30:08.364 "numa_id": 1, 00:30:08.364 "assigned_rate_limits": { 00:30:08.364 "rw_ios_per_sec": 0, 00:30:08.364 "rw_mbytes_per_sec": 0, 00:30:08.364 "r_mbytes_per_sec": 0, 00:30:08.364 "w_mbytes_per_sec": 0 00:30:08.364 }, 00:30:08.364 "claimed": false, 00:30:08.364 "zoned": false, 00:30:08.364 "supported_io_types": { 00:30:08.364 "read": true, 00:30:08.364 "write": true, 00:30:08.364 "unmap": false, 00:30:08.364 "flush": true, 00:30:08.364 "reset": true, 00:30:08.364 "nvme_admin": true, 00:30:08.364 "nvme_io": true, 00:30:08.364 "nvme_io_md": false, 00:30:08.364 "write_zeroes": true, 00:30:08.364 "zcopy": false, 00:30:08.364 "get_zone_info": false, 00:30:08.364 "zone_management": false, 00:30:08.364 "zone_append": false, 00:30:08.364 "compare": true, 00:30:08.364 "compare_and_write": true, 00:30:08.364 "abort": true, 00:30:08.364 "seek_hole": false, 00:30:08.364 "seek_data": false, 00:30:08.364 "copy": true, 00:30:08.364 "nvme_iov_md": false 00:30:08.364 }, 00:30:08.364 "memory_domains": [ 00:30:08.364 { 00:30:08.364 "dma_device_id": "system", 00:30:08.364 "dma_device_type": 1 00:30:08.364 } 00:30:08.364 ], 00:30:08.364 "driver_specific": { 00:30:08.364 "nvme": [ 00:30:08.364 { 00:30:08.364 "trid": { 00:30:08.364 "trtype": "TCP", 00:30:08.364 "adrfam": "IPv4", 00:30:08.364 "traddr": "10.0.0.2", 00:30:08.364 "trsvcid": "4421", 00:30:08.364 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:08.364 }, 00:30:08.364 "ctrlr_data": { 00:30:08.364 "cntlid": 3, 00:30:08.364 "vendor_id": "0x8086", 00:30:08.364 "model_number": "SPDK bdev Controller", 00:30:08.364 "serial_number": "00000000000000000000", 00:30:08.364 "firmware_revision": "25.01", 00:30:08.364 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:08.364 "oacs": { 00:30:08.364 "security": 0, 00:30:08.364 "format": 0, 00:30:08.364 "firmware": 0, 00:30:08.364 "ns_manage": 0 00:30:08.364 }, 00:30:08.364 "multi_ctrlr": true, 00:30:08.364 "ana_reporting": false 00:30:08.364 }, 00:30:08.364 "vs": { 00:30:08.364 "nvme_version": "1.3" 00:30:08.364 }, 00:30:08.364 "ns_data": { 00:30:08.364 "id": 1, 00:30:08.364 "can_share": true 00:30:08.364 } 00:30:08.364 } 00:30:08.364 ], 00:30:08.364 "mp_policy": "active_passive" 00:30:08.364 } 00:30:08.364 } 00:30:08.364 ] 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.9BFDEXj4pK 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
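The second half of the test repeats the attach over a TLS-protected listener: a retained NVMe TLS PSK is written to a mode-0600 temp file and registered with the keyring, allow-any-host is disabled, a --secure-channel listener is added on port 4421, the host NQN is whitelisted with that PSK, and the controller is re-attached over 4421 with the same key (the bdev dump above accordingly shows trsvcid 4421 and cntlid 3). A condensed sketch with the RPCs from this trace (scripts/rpc.py assumed; key material exactly as echoed above):

  KEY=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
  chmod 0600 "$KEY"
  ./scripts/rpc.py keyring_file_add_key key0 "$KEY"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 \
      -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  rm -f "$KEY"                                                      # key file removed once the check is done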
00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:08.364 rmmod nvme_tcp 00:30:08.364 rmmod nvme_fabrics 00:30:08.364 rmmod nvme_keyring 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 353474 ']' 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 353474 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 353474 ']' 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 353474 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.364 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353474 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353474' 00:30:08.624 killing process with pid 353474 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 353474 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 353474 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.624 
03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.624 03:11:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:11.158 00:30:11.158 real 0m9.303s 00:30:11.158 user 0m3.046s 00:30:11.158 sys 0m4.685s 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:11.158 ************************************ 00:30:11.158 END TEST nvmf_async_init 00:30:11.158 ************************************ 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.158 ************************************ 00:30:11.158 START TEST dma 00:30:11.158 ************************************ 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:11.158 * Looking for test storage... 00:30:11.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:11.158 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:11.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.159 --rc genhtml_branch_coverage=1 00:30:11.159 --rc genhtml_function_coverage=1 00:30:11.159 --rc genhtml_legend=1 00:30:11.159 --rc geninfo_all_blocks=1 00:30:11.159 --rc geninfo_unexecuted_blocks=1 00:30:11.159 00:30:11.159 ' 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:11.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.159 --rc genhtml_branch_coverage=1 00:30:11.159 --rc genhtml_function_coverage=1 00:30:11.159 --rc genhtml_legend=1 00:30:11.159 --rc geninfo_all_blocks=1 00:30:11.159 --rc geninfo_unexecuted_blocks=1 00:30:11.159 00:30:11.159 ' 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:11.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.159 --rc genhtml_branch_coverage=1 00:30:11.159 --rc genhtml_function_coverage=1 00:30:11.159 --rc genhtml_legend=1 00:30:11.159 --rc geninfo_all_blocks=1 00:30:11.159 --rc geninfo_unexecuted_blocks=1 00:30:11.159 00:30:11.159 ' 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:11.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.159 --rc genhtml_branch_coverage=1 00:30:11.159 --rc genhtml_function_coverage=1 00:30:11.159 --rc genhtml_legend=1 00:30:11.159 --rc geninfo_all_blocks=1 00:30:11.159 --rc geninfo_unexecuted_blocks=1 00:30:11.159 00:30:11.159 ' 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.159 
03:11:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.159 03:11:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:11.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:11.159 00:30:11.159 real 0m0.212s 00:30:11.159 user 0m0.129s 00:30:11.159 sys 0m0.096s 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:11.159 ************************************ 00:30:11.159 END TEST dma 00:30:11.159 ************************************ 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.159 ************************************ 00:30:11.159 START TEST nvmf_identify 00:30:11.159 
************************************ 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:11.159 * Looking for test storage... 00:30:11.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:11.159 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:11.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.160 --rc genhtml_branch_coverage=1 00:30:11.160 --rc genhtml_function_coverage=1 00:30:11.160 --rc genhtml_legend=1 00:30:11.160 --rc geninfo_all_blocks=1 00:30:11.160 --rc geninfo_unexecuted_blocks=1 00:30:11.160 00:30:11.160 ' 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:11.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.160 --rc genhtml_branch_coverage=1 00:30:11.160 --rc genhtml_function_coverage=1 00:30:11.160 --rc genhtml_legend=1 00:30:11.160 --rc geninfo_all_blocks=1 00:30:11.160 --rc geninfo_unexecuted_blocks=1 00:30:11.160 00:30:11.160 ' 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:11.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.160 --rc genhtml_branch_coverage=1 00:30:11.160 --rc genhtml_function_coverage=1 00:30:11.160 --rc genhtml_legend=1 00:30:11.160 --rc geninfo_all_blocks=1 00:30:11.160 --rc geninfo_unexecuted_blocks=1 00:30:11.160 00:30:11.160 ' 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:11.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.160 --rc genhtml_branch_coverage=1 00:30:11.160 --rc genhtml_function_coverage=1 00:30:11.160 --rc genhtml_legend=1 00:30:11.160 --rc geninfo_all_blocks=1 00:30:11.160 --rc geninfo_unexecuted_blocks=1 00:30:11.160 00:30:11.160 ' 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.160 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:11.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:11.419 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:11.420 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:11.420 03:11:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.995 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:17.996 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:17.996 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:17.996 Found net devices under 0000:af:00.0: cvl_0_0 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:17.996 Found net devices under 0000:af:00.1: cvl_0_1 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:17.996 03:11:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:17.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:30:17.996 00:30:17.996 --- 10.0.0.2 ping statistics --- 00:30:17.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.996 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:17.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:30:17.996 00:30:17.996 --- 10.0.0.1 ping statistics --- 00:30:17.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.996 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=355814 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 355814 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 355814 ']' 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.996 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:17.996 [2024-12-14 03:11:32.219993] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
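Condensed, the nvmf_tcp_init sequence traced above plus the target launch from host/identify.sh amounts to the following commands (a sketch using the interface names and addresses this particular run picked; run as root from the SPDK tree):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                             # reach the target-side address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back the other way
  # identify.sh then starts the target inside that namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &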
00:30:17.996 [2024-12-14 03:11:32.220034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.996 [2024-12-14 03:11:32.295755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:17.997 [2024-12-14 03:11:32.319451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.997 [2024-12-14 03:11:32.319486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.997 [2024-12-14 03:11:32.319493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.997 [2024-12-14 03:11:32.319499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.997 [2024-12-14 03:11:32.319504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.997 [2024-12-14 03:11:32.320761] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.997 [2024-12-14 03:11:32.320875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:17.997 [2024-12-14 03:11:32.320982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.997 [2024-12-14 03:11:32.320983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:17.997 [2024-12-14 03:11:32.412247] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:17.997 Malloc0 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:17.997 [2024-12-14 03:11:32.512703] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:17.997 [ 00:30:17.997 { 00:30:17.997 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:17.997 "subtype": "Discovery", 00:30:17.997 "listen_addresses": [ 00:30:17.997 { 00:30:17.997 "trtype": "TCP", 00:30:17.997 "adrfam": "IPv4", 00:30:17.997 "traddr": "10.0.0.2", 00:30:17.997 "trsvcid": "4420" 00:30:17.997 } 00:30:17.997 ], 00:30:17.997 "allow_any_host": true, 00:30:17.997 "hosts": [] 00:30:17.997 }, 00:30:17.997 { 00:30:17.997 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.997 "subtype": "NVMe", 00:30:17.997 "listen_addresses": [ 00:30:17.997 { 00:30:17.997 "trtype": "TCP", 00:30:17.997 "adrfam": "IPv4", 00:30:17.997 "traddr": "10.0.0.2", 00:30:17.997 "trsvcid": "4420" 00:30:17.997 } 00:30:17.997 ], 00:30:17.997 "allow_any_host": true, 00:30:17.997 "hosts": [], 00:30:17.997 "serial_number": "SPDK00000000000001", 00:30:17.997 "model_number": "SPDK bdev Controller", 00:30:17.997 "max_namespaces": 32, 00:30:17.997 "min_cntlid": 1, 00:30:17.997 "max_cntlid": 65519, 00:30:17.997 "namespaces": [ 00:30:17.997 { 00:30:17.997 "nsid": 1, 00:30:17.997 "bdev_name": "Malloc0", 00:30:17.997 "name": "Malloc0", 00:30:17.997 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:17.997 "eui64": "ABCDEF0123456789", 00:30:17.997 "uuid": "ebbfd9b9-eb24-465f-958d-cc38bee2789f" 00:30:17.997 } 00:30:17.997 ] 00:30:17.997 } 00:30:17.997 ] 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.997 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:17.997 [2024-12-14 03:11:32.568335] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:30:17.997 [2024-12-14 03:11:32.568369] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355846 ] 00:30:17.997 [2024-12-14 03:11:32.609262] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:17.997 [2024-12-14 03:11:32.609306] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:17.997 [2024-12-14 03:11:32.613318] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:17.997 [2024-12-14 03:11:32.613337] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:17.997 [2024-12-14 03:11:32.613346] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:17.997 [2024-12-14 03:11:32.613864] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:17.997 [2024-12-14 03:11:32.613894] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x231ded0 0 00:30:17.997 [2024-12-14 03:11:32.620325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:17.997 [2024-12-14 03:11:32.620338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:17.997 [2024-12-14 03:11:32.620342] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:17.997 [2024-12-14 03:11:32.620345] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:17.997 [2024-12-14 03:11:32.620376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.997 [2024-12-14 03:11:32.620381] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.997 [2024-12-14 03:11:32.620385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x231ded0) 00:30:17.997 [2024-12-14 03:11:32.620396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:17.997 [2024-12-14 03:11:32.620412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389540, cid 0, qid 0 00:30:17.997 [2024-12-14 03:11:32.627323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.997 [2024-12-14 03:11:32.627331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.997 [2024-12-14 03:11:32.627334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:17.997 [2024-12-14 03:11:32.627338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389540) on tqpair=0x231ded0 00:30:17.997 [2024-12-14 03:11:32.627350] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:17.997 [2024-12-14 03:11:32.627356] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:17.997 [2024-12-14 03:11:32.627361] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:17.997 [2024-12-14 03:11:32.627371] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.997 [2024-12-14 03:11:32.627375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.997 [2024-12-14 03:11:32.627378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x231ded0) 00:30:17.997 [2024-12-14 03:11:32.627385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.997 [2024-12-14 03:11:32.627397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389540, cid 0, qid 0 00:30:17.997 [2024-12-14 03:11:32.627558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.997 [2024-12-14 03:11:32.627564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.997 [2024-12-14 03:11:32.627567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:17.997 [2024-12-14 03:11:32.627573] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389540) on tqpair=0x231ded0 00:30:17.997 [2024-12-14 03:11:32.627578] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:17.997 [2024-12-14 03:11:32.627584] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:17.997 [2024-12-14 03:11:32.627591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.997 [2024-12-14 03:11:32.627594] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.997 [2024-12-14 03:11:32.627597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x231ded0) 00:30:17.997 [2024-12-14 03:11:32.627603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.997 [2024-12-14 03:11:32.627613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389540, cid 0, qid 0 00:30:17.997 [2024-12-14 03:11:32.627708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.997 [2024-12-14 03:11:32.627714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.997 [2024-12-14 03:11:32.627717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:17.997 [2024-12-14 03:11:32.627720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389540) on tqpair=0x231ded0 00:30:17.998 [2024-12-14 03:11:32.627724] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:17.998 [2024-12-14 03:11:32.627731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:17.998 [2024-12-14 03:11:32.627736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.627739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.627742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x231ded0) 00:30:17.998 [2024-12-14 03:11:32.627748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.998 [2024-12-14 03:11:32.627757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389540, cid 0, qid 0 
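The rpc_cmd calls issued by host/identify.sh earlier in this trace go through the autotest wrapper that forwards its arguments to scripts/rpc.py on the target's default /var/tmp/spdk.sock socket (the socket waitforlisten polls above). The same configuration can be replayed by hand; this is a sketch using the exact arguments from the trace:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems        # prints the JSON subsystem listing shown above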
00:30:17.998 [2024-12-14 03:11:32.627858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.998 [2024-12-14 03:11:32.627864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.998 [2024-12-14 03:11:32.627867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.627870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389540) on tqpair=0x231ded0 00:30:17.998 [2024-12-14 03:11:32.627874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:17.998 [2024-12-14 03:11:32.627882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.627886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.627889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x231ded0) 00:30:17.998 [2024-12-14 03:11:32.627894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.998 [2024-12-14 03:11:32.627903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389540, cid 0, qid 0 00:30:17.998 [2024-12-14 03:11:32.628009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.998 [2024-12-14 03:11:32.628014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.998 [2024-12-14 03:11:32.628017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389540) on tqpair=0x231ded0 00:30:17.998 [2024-12-14 03:11:32.628024] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:17.998 [2024-12-14 03:11:32.628030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:17.998 [2024-12-14 03:11:32.628037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:17.998 [2024-12-14 03:11:32.628145] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:17.998 [2024-12-14 03:11:32.628149] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:17.998 [2024-12-14 03:11:32.628156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x231ded0) 00:30:17.998 [2024-12-14 03:11:32.628168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.998 [2024-12-14 03:11:32.628177] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389540, cid 0, qid 0 00:30:17.998 [2024-12-14 03:11:32.628236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.998 [2024-12-14 03:11:32.628242] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.998 [2024-12-14 03:11:32.628245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389540) on tqpair=0x231ded0 00:30:17.998 [2024-12-14 03:11:32.628252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:17.998 [2024-12-14 03:11:32.628260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x231ded0) 00:30:17.998 [2024-12-14 03:11:32.628272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.998 [2024-12-14 03:11:32.628281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389540, cid 0, qid 0 00:30:17.998 [2024-12-14 03:11:32.628345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.998 [2024-12-14 03:11:32.628351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.998 [2024-12-14 03:11:32.628354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389540) on tqpair=0x231ded0 00:30:17.998 [2024-12-14 03:11:32.628361] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:17.998 [2024-12-14 03:11:32.628365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:17.998 [2024-12-14 03:11:32.628372] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:17.998 [2024-12-14 03:11:32.628379] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:17.998 [2024-12-14 03:11:32.628386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x231ded0) 00:30:17.998 [2024-12-14 03:11:32.628395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.998 [2024-12-14 03:11:32.628406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389540, cid 0, qid 0 00:30:17.998 [2024-12-14 03:11:32.628498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:17.998 [2024-12-14 03:11:32.628504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:17.998 [2024-12-14 03:11:32.628507] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628511] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x231ded0): datao=0, datal=4096, cccid=0 00:30:17.998 [2024-12-14 03:11:32.628515] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x2389540) on tqpair(0x231ded0): expected_datao=0, payload_size=4096 00:30:17.998 [2024-12-14 03:11:32.628519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628525] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628528] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.998 [2024-12-14 03:11:32.628551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.998 [2024-12-14 03:11:32.628554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389540) on tqpair=0x231ded0 00:30:17.998 [2024-12-14 03:11:32.628564] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:17.998 [2024-12-14 03:11:32.628568] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:17.998 [2024-12-14 03:11:32.628572] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:17.998 [2024-12-14 03:11:32.628576] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:17.998 [2024-12-14 03:11:32.628580] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:17.998 [2024-12-14 03:11:32.628584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:17.998 [2024-12-14 03:11:32.628593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:17.998 [2024-12-14 03:11:32.628603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x231ded0) 00:30:17.998 [2024-12-14 03:11:32.628615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:17.998 [2024-12-14 03:11:32.628624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389540, cid 0, qid 0 00:30:17.998 [2024-12-14 03:11:32.628696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.998 [2024-12-14 03:11:32.628702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.998 [2024-12-14 03:11:32.628705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389540) on tqpair=0x231ded0 00:30:17.998 [2024-12-14 03:11:32.628714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x231ded0) 00:30:17.998 
[2024-12-14 03:11:32.628725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:17.998 [2024-12-14 03:11:32.628730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x231ded0) 00:30:17.998 [2024-12-14 03:11:32.628743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:17.998 [2024-12-14 03:11:32.628748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x231ded0) 00:30:17.998 [2024-12-14 03:11:32.628759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:17.998 [2024-12-14 03:11:32.628764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.998 [2024-12-14 03:11:32.628770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x231ded0) 00:30:17.998 [2024-12-14 03:11:32.628774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:17.998 [2024-12-14 03:11:32.628778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:17.998 [2024-12-14 03:11:32.628789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:17.998 [2024-12-14 03:11:32.628794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.628797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x231ded0) 00:30:17.999 [2024-12-14 03:11:32.628802] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.999 [2024-12-14 03:11:32.628813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389540, cid 0, qid 0 00:30:17.999 [2024-12-14 03:11:32.628817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23896c0, cid 1, qid 0 00:30:17.999 [2024-12-14 03:11:32.628821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389840, cid 2, qid 0 00:30:17.999 [2024-12-14 03:11:32.628825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23899c0, cid 3, qid 0 00:30:17.999 [2024-12-14 03:11:32.628829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389b40, cid 4, qid 0 00:30:17.999 [2024-12-14 03:11:32.628948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.999 [2024-12-14 03:11:32.628954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.999 [2024-12-14 03:11:32.628957] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:30:17.999 [2024-12-14 03:11:32.628960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389b40) on tqpair=0x231ded0 00:30:17.999 [2024-12-14 03:11:32.628964] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:17.999 [2024-12-14 03:11:32.628968] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:17.999 [2024-12-14 03:11:32.628977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.628980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x231ded0) 00:30:17.999 [2024-12-14 03:11:32.628985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.999 [2024-12-14 03:11:32.628994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389b40, cid 4, qid 0 00:30:17.999 [2024-12-14 03:11:32.629061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:17.999 [2024-12-14 03:11:32.629066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:17.999 [2024-12-14 03:11:32.629071] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.629074] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x231ded0): datao=0, datal=4096, cccid=4 00:30:17.999 [2024-12-14 03:11:32.629078] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2389b40) on tqpair(0x231ded0): expected_datao=0, payload_size=4096 00:30:17.999 [2024-12-14 03:11:32.629081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.629101] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.629105] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.629148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.999 [2024-12-14 03:11:32.629153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.999 [2024-12-14 03:11:32.629156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.629159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389b40) on tqpair=0x231ded0 00:30:17.999 [2024-12-14 03:11:32.629170] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:17.999 [2024-12-14 03:11:32.629191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.629195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x231ded0) 00:30:17.999 [2024-12-14 03:11:32.629201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.999 [2024-12-14 03:11:32.629206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.629209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.629212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x231ded0) 00:30:17.999 [2024-12-14 03:11:32.629217] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:17.999 [2024-12-14 03:11:32.629229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389b40, cid 4, qid 0 00:30:17.999 [2024-12-14 03:11:32.629233] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389cc0, cid 5, qid 0 00:30:17.999 [2024-12-14 03:11:32.629335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:17.999 [2024-12-14 03:11:32.629342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:17.999 [2024-12-14 03:11:32.629344] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.629347] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x231ded0): datao=0, datal=1024, cccid=4 00:30:17.999 [2024-12-14 03:11:32.629351] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2389b40) on tqpair(0x231ded0): expected_datao=0, payload_size=1024 00:30:17.999 [2024-12-14 03:11:32.629355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.629360] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.629363] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.629368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.999 [2024-12-14 03:11:32.629373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.999 [2024-12-14 03:11:32.629375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.629379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389cc0) on tqpair=0x231ded0 00:30:17.999 [2024-12-14 03:11:32.673322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.999 [2024-12-14 03:11:32.673333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.999 [2024-12-14 03:11:32.673336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.673340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389b40) on tqpair=0x231ded0 00:30:17.999 [2024-12-14 03:11:32.673353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.673357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x231ded0) 00:30:17.999 [2024-12-14 03:11:32.673364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.999 [2024-12-14 03:11:32.673380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389b40, cid 4, qid 0 00:30:17.999 [2024-12-14 03:11:32.673535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:17.999 [2024-12-14 03:11:32.673541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:17.999 [2024-12-14 03:11:32.673544] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.673547] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x231ded0): datao=0, datal=3072, cccid=4 00:30:17.999 [2024-12-14 03:11:32.673551] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2389b40) on tqpair(0x231ded0): expected_datao=0, payload_size=3072 00:30:17.999 [2024-12-14 03:11:32.673555] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.673561] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.673564] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.673632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.999 [2024-12-14 03:11:32.673638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.999 [2024-12-14 03:11:32.673641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.673644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389b40) on tqpair=0x231ded0 00:30:17.999 [2024-12-14 03:11:32.673651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.673655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x231ded0) 00:30:17.999 [2024-12-14 03:11:32.673660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.999 [2024-12-14 03:11:32.673672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2389b40, cid 4, qid 0 00:30:17.999 [2024-12-14 03:11:32.673783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:17.999 [2024-12-14 03:11:32.673788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:17.999 [2024-12-14 03:11:32.673791] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.673794] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x231ded0): datao=0, datal=8, cccid=4 00:30:17.999 [2024-12-14 03:11:32.673798] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2389b40) on tqpair(0x231ded0): expected_datao=0, payload_size=8 00:30:17.999 [2024-12-14 03:11:32.673802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.673807] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.673810] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.715459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:17.999 [2024-12-14 03:11:32.715471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:17.999 [2024-12-14 03:11:32.715474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:17.999 [2024-12-14 03:11:32.715478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389b40) on tqpair=0x231ded0 00:30:17.999 ===================================================== 00:30:17.999 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:17.999 ===================================================== 00:30:17.999 Controller Capabilities/Features 00:30:17.999 ================================ 00:30:17.999 Vendor ID: 0000 00:30:17.999 Subsystem Vendor ID: 0000 00:30:17.999 Serial Number: .................... 00:30:17.999 Model Number: ........................................ 
00:30:17.999 Firmware Version: 25.01 00:30:17.999 Recommended Arb Burst: 0 00:30:17.999 IEEE OUI Identifier: 00 00 00 00:30:17.999 Multi-path I/O 00:30:17.999 May have multiple subsystem ports: No 00:30:17.999 May have multiple controllers: No 00:30:17.999 Associated with SR-IOV VF: No 00:30:17.999 Max Data Transfer Size: 131072 00:30:17.999 Max Number of Namespaces: 0 00:30:17.999 Max Number of I/O Queues: 1024 00:30:17.999 NVMe Specification Version (VS): 1.3 00:30:17.999 NVMe Specification Version (Identify): 1.3 00:30:17.999 Maximum Queue Entries: 128 00:30:17.999 Contiguous Queues Required: Yes 00:30:17.999 Arbitration Mechanisms Supported 00:30:17.999 Weighted Round Robin: Not Supported 00:30:17.999 Vendor Specific: Not Supported 00:30:17.999 Reset Timeout: 15000 ms 00:30:17.999 Doorbell Stride: 4 bytes 00:30:17.999 NVM Subsystem Reset: Not Supported 00:30:17.999 Command Sets Supported 00:30:17.999 NVM Command Set: Supported 00:30:17.999 Boot Partition: Not Supported 00:30:17.999 Memory Page Size Minimum: 4096 bytes 00:30:17.999 Memory Page Size Maximum: 4096 bytes 00:30:17.999 Persistent Memory Region: Not Supported 00:30:17.999 Optional Asynchronous Events Supported 00:30:18.000 Namespace Attribute Notices: Not Supported 00:30:18.000 Firmware Activation Notices: Not Supported 00:30:18.000 ANA Change Notices: Not Supported 00:30:18.000 PLE Aggregate Log Change Notices: Not Supported 00:30:18.000 LBA Status Info Alert Notices: Not Supported 00:30:18.000 EGE Aggregate Log Change Notices: Not Supported 00:30:18.000 Normal NVM Subsystem Shutdown event: Not Supported 00:30:18.000 Zone Descriptor Change Notices: Not Supported 00:30:18.000 Discovery Log Change Notices: Supported 00:30:18.000 Controller Attributes 00:30:18.000 128-bit Host Identifier: Not Supported 00:30:18.000 Non-Operational Permissive Mode: Not Supported 00:30:18.000 NVM Sets: Not Supported 00:30:18.000 Read Recovery Levels: Not Supported 00:30:18.000 Endurance Groups: Not Supported 00:30:18.000 Predictable Latency Mode: Not Supported 00:30:18.000 Traffic Based Keep ALive: Not Supported 00:30:18.000 Namespace Granularity: Not Supported 00:30:18.000 SQ Associations: Not Supported 00:30:18.000 UUID List: Not Supported 00:30:18.000 Multi-Domain Subsystem: Not Supported 00:30:18.000 Fixed Capacity Management: Not Supported 00:30:18.000 Variable Capacity Management: Not Supported 00:30:18.000 Delete Endurance Group: Not Supported 00:30:18.000 Delete NVM Set: Not Supported 00:30:18.000 Extended LBA Formats Supported: Not Supported 00:30:18.000 Flexible Data Placement Supported: Not Supported 00:30:18.000 00:30:18.000 Controller Memory Buffer Support 00:30:18.000 ================================ 00:30:18.000 Supported: No 00:30:18.000 00:30:18.000 Persistent Memory Region Support 00:30:18.000 ================================ 00:30:18.000 Supported: No 00:30:18.000 00:30:18.000 Admin Command Set Attributes 00:30:18.000 ============================ 00:30:18.000 Security Send/Receive: Not Supported 00:30:18.000 Format NVM: Not Supported 00:30:18.000 Firmware Activate/Download: Not Supported 00:30:18.000 Namespace Management: Not Supported 00:30:18.000 Device Self-Test: Not Supported 00:30:18.000 Directives: Not Supported 00:30:18.000 NVMe-MI: Not Supported 00:30:18.000 Virtualization Management: Not Supported 00:30:18.000 Doorbell Buffer Config: Not Supported 00:30:18.000 Get LBA Status Capability: Not Supported 00:30:18.000 Command & Feature Lockdown Capability: Not Supported 00:30:18.000 Abort Command Limit: 1 00:30:18.000 Async 
Event Request Limit: 4 00:30:18.000 Number of Firmware Slots: N/A 00:30:18.000 Firmware Slot 1 Read-Only: N/A 00:30:18.000 Firmware Activation Without Reset: N/A 00:30:18.000 Multiple Update Detection Support: N/A 00:30:18.000 Firmware Update Granularity: No Information Provided 00:30:18.000 Per-Namespace SMART Log: No 00:30:18.000 Asymmetric Namespace Access Log Page: Not Supported 00:30:18.000 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:18.000 Command Effects Log Page: Not Supported 00:30:18.000 Get Log Page Extended Data: Supported 00:30:18.000 Telemetry Log Pages: Not Supported 00:30:18.000 Persistent Event Log Pages: Not Supported 00:30:18.000 Supported Log Pages Log Page: May Support 00:30:18.000 Commands Supported & Effects Log Page: Not Supported 00:30:18.000 Feature Identifiers & Effects Log Page:May Support 00:30:18.000 NVMe-MI Commands & Effects Log Page: May Support 00:30:18.000 Data Area 4 for Telemetry Log: Not Supported 00:30:18.000 Error Log Page Entries Supported: 128 00:30:18.000 Keep Alive: Not Supported 00:30:18.000 00:30:18.000 NVM Command Set Attributes 00:30:18.000 ========================== 00:30:18.000 Submission Queue Entry Size 00:30:18.000 Max: 1 00:30:18.000 Min: 1 00:30:18.000 Completion Queue Entry Size 00:30:18.000 Max: 1 00:30:18.000 Min: 1 00:30:18.000 Number of Namespaces: 0 00:30:18.000 Compare Command: Not Supported 00:30:18.000 Write Uncorrectable Command: Not Supported 00:30:18.000 Dataset Management Command: Not Supported 00:30:18.000 Write Zeroes Command: Not Supported 00:30:18.000 Set Features Save Field: Not Supported 00:30:18.000 Reservations: Not Supported 00:30:18.000 Timestamp: Not Supported 00:30:18.000 Copy: Not Supported 00:30:18.000 Volatile Write Cache: Not Present 00:30:18.000 Atomic Write Unit (Normal): 1 00:30:18.000 Atomic Write Unit (PFail): 1 00:30:18.000 Atomic Compare & Write Unit: 1 00:30:18.000 Fused Compare & Write: Supported 00:30:18.000 Scatter-Gather List 00:30:18.000 SGL Command Set: Supported 00:30:18.000 SGL Keyed: Supported 00:30:18.000 SGL Bit Bucket Descriptor: Not Supported 00:30:18.000 SGL Metadata Pointer: Not Supported 00:30:18.000 Oversized SGL: Not Supported 00:30:18.000 SGL Metadata Address: Not Supported 00:30:18.000 SGL Offset: Supported 00:30:18.000 Transport SGL Data Block: Not Supported 00:30:18.000 Replay Protected Memory Block: Not Supported 00:30:18.000 00:30:18.000 Firmware Slot Information 00:30:18.000 ========================= 00:30:18.000 Active slot: 0 00:30:18.000 00:30:18.000 00:30:18.000 Error Log 00:30:18.000 ========= 00:30:18.000 00:30:18.000 Active Namespaces 00:30:18.000 ================= 00:30:18.000 Discovery Log Page 00:30:18.000 ================== 00:30:18.000 Generation Counter: 2 00:30:18.000 Number of Records: 2 00:30:18.000 Record Format: 0 00:30:18.000 00:30:18.000 Discovery Log Entry 0 00:30:18.000 ---------------------- 00:30:18.000 Transport Type: 3 (TCP) 00:30:18.000 Address Family: 1 (IPv4) 00:30:18.000 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:18.000 Entry Flags: 00:30:18.000 Duplicate Returned Information: 1 00:30:18.000 Explicit Persistent Connection Support for Discovery: 1 00:30:18.000 Transport Requirements: 00:30:18.000 Secure Channel: Not Required 00:30:18.000 Port ID: 0 (0x0000) 00:30:18.000 Controller ID: 65535 (0xffff) 00:30:18.000 Admin Max SQ Size: 128 00:30:18.000 Transport Service Identifier: 4420 00:30:18.000 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:18.000 Transport Address: 10.0.0.2 00:30:18.000 
Discovery Log Entry 1 00:30:18.000 ---------------------- 00:30:18.000 Transport Type: 3 (TCP) 00:30:18.000 Address Family: 1 (IPv4) 00:30:18.000 Subsystem Type: 2 (NVM Subsystem) 00:30:18.000 Entry Flags: 00:30:18.000 Duplicate Returned Information: 0 00:30:18.000 Explicit Persistent Connection Support for Discovery: 0 00:30:18.000 Transport Requirements: 00:30:18.000 Secure Channel: Not Required 00:30:18.000 Port ID: 0 (0x0000) 00:30:18.000 Controller ID: 65535 (0xffff) 00:30:18.000 Admin Max SQ Size: 128 00:30:18.000 Transport Service Identifier: 4420 00:30:18.000 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:18.000 Transport Address: 10.0.0.2 [2024-12-14 03:11:32.715556] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:18.000 [2024-12-14 03:11:32.715567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389540) on tqpair=0x231ded0 00:30:18.000 [2024-12-14 03:11:32.715574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.000 [2024-12-14 03:11:32.715578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23896c0) on tqpair=0x231ded0 00:30:18.000 [2024-12-14 03:11:32.715584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.000 [2024-12-14 03:11:32.715588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2389840) on tqpair=0x231ded0 00:30:18.000 [2024-12-14 03:11:32.715592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.000 [2024-12-14 03:11:32.715596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23899c0) on tqpair=0x231ded0 00:30:18.001 [2024-12-14 03:11:32.715600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-12-14 03:11:32.715607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.715611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.715614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x231ded0) 00:30:18.001 [2024-12-14 03:11:32.715621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-12-14 03:11:32.715633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23899c0, cid 3, qid 0 00:30:18.001 [2024-12-14 03:11:32.715693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.001 [2024-12-14 03:11:32.715699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.001 [2024-12-14 03:11:32.715702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.715706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23899c0) on tqpair=0x231ded0 00:30:18.001 [2024-12-14 03:11:32.715711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.715714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.715717] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x231ded0) 00:30:18.001 [2024-12-14 
03:11:32.715723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-12-14 03:11:32.715736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23899c0, cid 3, qid 0 00:30:18.001 [2024-12-14 03:11:32.715849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.001 [2024-12-14 03:11:32.715854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.001 [2024-12-14 03:11:32.715857] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.715860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23899c0) on tqpair=0x231ded0 00:30:18.001 [2024-12-14 03:11:32.715865] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:18.001 [2024-12-14 03:11:32.715869] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:18.001 [2024-12-14 03:11:32.715876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.715880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.715883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x231ded0) 00:30:18.001 [2024-12-14 03:11:32.715888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-12-14 03:11:32.715897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23899c0, cid 3, qid 0 00:30:18.001 [2024-12-14 03:11:32.716000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.001 [2024-12-14 03:11:32.716006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.001 [2024-12-14 03:11:32.716009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.716012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23899c0) on tqpair=0x231ded0 00:30:18.001 [2024-12-14 03:11:32.716022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.716026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.716029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x231ded0) 00:30:18.001 [2024-12-14 03:11:32.716034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-12-14 03:11:32.716043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23899c0, cid 3, qid 0 00:30:18.001 [2024-12-14 03:11:32.716102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.001 [2024-12-14 03:11:32.716108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.001 [2024-12-14 03:11:32.716111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.716114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23899c0) on tqpair=0x231ded0 00:30:18.001 [2024-12-14 03:11:32.716122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.716125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.716128] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x231ded0) 00:30:18.001 [2024-12-14 03:11:32.716134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-12-14 03:11:32.716143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23899c0, cid 3, qid 0 00:30:18.001 [2024-12-14 03:11:32.716204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.001 [2024-12-14 03:11:32.716209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.001 [2024-12-14 03:11:32.716212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.716215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23899c0) on tqpair=0x231ded0 00:30:18.001 [2024-12-14 03:11:32.716223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.716227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.716230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x231ded0) 00:30:18.001 [2024-12-14 03:11:32.716235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-12-14 03:11:32.716244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23899c0, cid 3, qid 0 00:30:18.001 [2024-12-14 03:11:32.720321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.001 [2024-12-14 03:11:32.720328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.001 [2024-12-14 03:11:32.720331] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.720335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23899c0) on tqpair=0x231ded0 00:30:18.001 [2024-12-14 03:11:32.720344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.720348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.720351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x231ded0) 00:30:18.001 [2024-12-14 03:11:32.720356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-12-14 03:11:32.720366] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23899c0, cid 3, qid 0 00:30:18.001 [2024-12-14 03:11:32.720552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.001 [2024-12-14 03:11:32.720558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.001 [2024-12-14 03:11:32.720560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.720563] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23899c0) on tqpair=0x231ded0 00:30:18.001 [2024-12-14 03:11:32.720570] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:30:18.001 00:30:18.001 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
00:30:18.001 [2024-12-14 03:11:32.758599] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:30:18.001 [2024-12-14 03:11:32.758646] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355849 ] 00:30:18.001 [2024-12-14 03:11:32.798481] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:18.001 [2024-12-14 03:11:32.798520] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:18.001 [2024-12-14 03:11:32.798525] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:18.001 [2024-12-14 03:11:32.798534] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:18.001 [2024-12-14 03:11:32.798541] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:18.001 [2024-12-14 03:11:32.802455] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:18.001 [2024-12-14 03:11:32.802480] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c47ed0 0 00:30:18.001 [2024-12-14 03:11:32.809324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:18.001 [2024-12-14 03:11:32.809337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:18.001 [2024-12-14 03:11:32.809341] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:18.001 [2024-12-14 03:11:32.809344] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:18.001 [2024-12-14 03:11:32.809368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.809372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.809376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c47ed0) 00:30:18.001 [2024-12-14 03:11:32.809386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:18.001 [2024-12-14 03:11:32.809402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3540, cid 0, qid 0 00:30:18.001 [2024-12-14 03:11:32.816321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.001 [2024-12-14 03:11:32.816333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.001 [2024-12-14 03:11:32.816337] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.816340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3540) on tqpair=0x1c47ed0 00:30:18.001 [2024-12-14 03:11:32.816349] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:18.001 [2024-12-14 03:11:32.816355] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:18.001 [2024-12-14 03:11:32.816360] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:18.001 [2024-12-14 03:11:32.816371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.816374] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.816378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c47ed0) 00:30:18.001 [2024-12-14 03:11:32.816385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-12-14 03:11:32.816401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3540, cid 0, qid 0 00:30:18.001 [2024-12-14 03:11:32.816527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.001 [2024-12-14 03:11:32.816533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.001 [2024-12-14 03:11:32.816536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.001 [2024-12-14 03:11:32.816540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3540) on tqpair=0x1c47ed0 00:30:18.001 [2024-12-14 03:11:32.816544] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:18.001 [2024-12-14 03:11:32.816550] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:18.002 [2024-12-14 03:11:32.816557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.816560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.816563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c47ed0) 00:30:18.002 [2024-12-14 03:11:32.816569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-12-14 03:11:32.816579] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3540, cid 0, qid 0 00:30:18.002 [2024-12-14 03:11:32.816674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.002 [2024-12-14 03:11:32.816680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.002 [2024-12-14 03:11:32.816683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.816686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3540) on tqpair=0x1c47ed0 00:30:18.002 [2024-12-14 03:11:32.816690] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:18.002 [2024-12-14 03:11:32.816697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:18.002 [2024-12-14 03:11:32.816703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.816707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.816710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c47ed0) 00:30:18.002 [2024-12-14 03:11:32.816715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-12-14 03:11:32.816725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3540, cid 0, qid 0 00:30:18.002 [2024-12-14 03:11:32.816826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.002 [2024-12-14 
03:11:32.816832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.002 [2024-12-14 03:11:32.816834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.816838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3540) on tqpair=0x1c47ed0 00:30:18.002 [2024-12-14 03:11:32.816842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:18.002 [2024-12-14 03:11:32.816850] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.816853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.816856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c47ed0) 00:30:18.002 [2024-12-14 03:11:32.816862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-12-14 03:11:32.816872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3540, cid 0, qid 0 00:30:18.002 [2024-12-14 03:11:32.816931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.002 [2024-12-14 03:11:32.816937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.002 [2024-12-14 03:11:32.816941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.816945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3540) on tqpair=0x1c47ed0 00:30:18.002 [2024-12-14 03:11:32.816949] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:18.002 [2024-12-14 03:11:32.816953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:18.002 [2024-12-14 03:11:32.816959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:18.002 [2024-12-14 03:11:32.817066] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:18.002 [2024-12-14 03:11:32.817071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:18.002 [2024-12-14 03:11:32.817078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.817081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.817084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c47ed0) 00:30:18.002 [2024-12-14 03:11:32.817089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-12-14 03:11:32.817099] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3540, cid 0, qid 0 00:30:18.002 [2024-12-14 03:11:32.817165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.002 [2024-12-14 03:11:32.817171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.002 [2024-12-14 03:11:32.817174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.002 [2024-12-14 
03:11:32.817177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3540) on tqpair=0x1c47ed0 00:30:18.002 [2024-12-14 03:11:32.817181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:18.002 [2024-12-14 03:11:32.817189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.817192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.817195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c47ed0) 00:30:18.002 [2024-12-14 03:11:32.817201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-12-14 03:11:32.817210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3540, cid 0, qid 0 00:30:18.002 [2024-12-14 03:11:32.817325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.002 [2024-12-14 03:11:32.817332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.002 [2024-12-14 03:11:32.817335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.817338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3540) on tqpair=0x1c47ed0 00:30:18.002 [2024-12-14 03:11:32.817342] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:18.002 [2024-12-14 03:11:32.817346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:18.002 [2024-12-14 03:11:32.817353] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:18.002 [2024-12-14 03:11:32.817360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:18.002 [2024-12-14 03:11:32.817367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.817371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c47ed0) 00:30:18.002 [2024-12-14 03:11:32.817378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-12-14 03:11:32.817389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3540, cid 0, qid 0 00:30:18.002 [2024-12-14 03:11:32.817485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.002 [2024-12-14 03:11:32.817491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.002 [2024-12-14 03:11:32.817495] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.817498] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c47ed0): datao=0, datal=4096, cccid=0 00:30:18.002 [2024-12-14 03:11:32.817502] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cb3540) on tqpair(0x1c47ed0): expected_datao=0, payload_size=4096 00:30:18.002 [2024-12-14 03:11:32.817505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.817531] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.817535] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.858454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.002 [2024-12-14 03:11:32.858465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.002 [2024-12-14 03:11:32.858468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.858472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3540) on tqpair=0x1c47ed0 00:30:18.002 [2024-12-14 03:11:32.858478] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:18.002 [2024-12-14 03:11:32.858483] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:18.002 [2024-12-14 03:11:32.858486] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:18.002 [2024-12-14 03:11:32.858490] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:18.002 [2024-12-14 03:11:32.858494] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:18.002 [2024-12-14 03:11:32.858498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:18.002 [2024-12-14 03:11:32.858509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:18.002 [2024-12-14 03:11:32.858519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.858522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.858526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c47ed0) 00:30:18.002 [2024-12-14 03:11:32.858532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:18.002 [2024-12-14 03:11:32.858543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3540, cid 0, qid 0 00:30:18.002 [2024-12-14 03:11:32.858654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.002 [2024-12-14 03:11:32.858659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.002 [2024-12-14 03:11:32.858662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.858666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3540) on tqpair=0x1c47ed0 00:30:18.002 [2024-12-14 03:11:32.858671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.858674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.858677] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c47ed0) 00:30:18.002 [2024-12-14 03:11:32.858682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.002 [2024-12-14 03:11:32.858690] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.858693] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.858696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c47ed0) 00:30:18.002 [2024-12-14 03:11:32.858701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.002 [2024-12-14 03:11:32.858706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.858709] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.002 [2024-12-14 03:11:32.858712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c47ed0) 00:30:18.002 [2024-12-14 03:11:32.858717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.003 [2024-12-14 03:11:32.858722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.858725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.858728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.003 [2024-12-14 03:11:32.858733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.003 [2024-12-14 03:11:32.858737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.858746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.858752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.858755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c47ed0) 00:30:18.003 [2024-12-14 03:11:32.858761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.003 [2024-12-14 03:11:32.858772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3540, cid 0, qid 0 00:30:18.003 [2024-12-14 03:11:32.858777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb36c0, cid 1, qid 0 00:30:18.003 [2024-12-14 03:11:32.858781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3840, cid 2, qid 0 00:30:18.003 [2024-12-14 03:11:32.858785] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.003 [2024-12-14 03:11:32.858789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3b40, cid 4, qid 0 00:30:18.003 [2024-12-14 03:11:32.858885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.003 [2024-12-14 03:11:32.858891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.003 [2024-12-14 03:11:32.858894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.858897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3b40) on tqpair=0x1c47ed0 00:30:18.003 [2024-12-14 03:11:32.858901] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 
00:30:18.003 [2024-12-14 03:11:32.858906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.858916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.858922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.858928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.858931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.858936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c47ed0) 00:30:18.003 [2024-12-14 03:11:32.858941] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:18.003 [2024-12-14 03:11:32.858951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3b40, cid 4, qid 0 00:30:18.003 [2024-12-14 03:11:32.859055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.003 [2024-12-14 03:11:32.859061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.003 [2024-12-14 03:11:32.859064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.859067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3b40) on tqpair=0x1c47ed0 00:30:18.003 [2024-12-14 03:11:32.859117] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.859125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.859132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.859135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c47ed0) 00:30:18.003 [2024-12-14 03:11:32.859140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.003 [2024-12-14 03:11:32.859150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3b40, cid 4, qid 0 00:30:18.003 [2024-12-14 03:11:32.859233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.003 [2024-12-14 03:11:32.859239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.003 [2024-12-14 03:11:32.859242] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.859245] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c47ed0): datao=0, datal=4096, cccid=4 00:30:18.003 [2024-12-14 03:11:32.859249] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cb3b40) on tqpair(0x1c47ed0): expected_datao=0, payload_size=4096 00:30:18.003 [2024-12-14 03:11:32.859253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.859259] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.859262] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.859306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.003 [2024-12-14 03:11:32.863316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.003 [2024-12-14 03:11:32.863321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.863325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3b40) on tqpair=0x1c47ed0 00:30:18.003 [2024-12-14 03:11:32.863335] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:18.003 [2024-12-14 03:11:32.863345] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.863355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.863361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.863364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c47ed0) 00:30:18.003 [2024-12-14 03:11:32.863370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.003 [2024-12-14 03:11:32.863381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3b40, cid 4, qid 0 00:30:18.003 [2024-12-14 03:11:32.863554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.003 [2024-12-14 03:11:32.863560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.003 [2024-12-14 03:11:32.863564] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.863568] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c47ed0): datao=0, datal=4096, cccid=4 00:30:18.003 [2024-12-14 03:11:32.863572] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cb3b40) on tqpair(0x1c47ed0): expected_datao=0, payload_size=4096 00:30:18.003 [2024-12-14 03:11:32.863575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.863581] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.863584] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.863611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.003 [2024-12-14 03:11:32.863616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.003 [2024-12-14 03:11:32.863619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.863623] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3b40) on tqpair=0x1c47ed0 00:30:18.003 [2024-12-14 03:11:32.863632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.863640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.863646] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.003 
[2024-12-14 03:11:32.863649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c47ed0) 00:30:18.003 [2024-12-14 03:11:32.863655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.003 [2024-12-14 03:11:32.863665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3b40, cid 4, qid 0 00:30:18.003 [2024-12-14 03:11:32.863737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.003 [2024-12-14 03:11:32.863743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.003 [2024-12-14 03:11:32.863746] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.863749] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c47ed0): datao=0, datal=4096, cccid=4 00:30:18.003 [2024-12-14 03:11:32.863753] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cb3b40) on tqpair(0x1c47ed0): expected_datao=0, payload_size=4096 00:30:18.003 [2024-12-14 03:11:32.863756] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.863762] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.863765] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.863811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.003 [2024-12-14 03:11:32.863817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.003 [2024-12-14 03:11:32.863820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.863823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3b40) on tqpair=0x1c47ed0 00:30:18.003 [2024-12-14 03:11:32.863829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.863837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.863844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.863849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.863853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.863859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.863864] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:18.003 [2024-12-14 03:11:32.863868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:18.003 [2024-12-14 03:11:32.863873] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:18.003 [2024-12-14 03:11:32.863885] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.863888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c47ed0) 00:30:18.003 [2024-12-14 03:11:32.863894] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.003 [2024-12-14 03:11:32.863900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.003 [2024-12-14 03:11:32.863903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.863906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c47ed0) 00:30:18.004 [2024-12-14 03:11:32.863911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.004 [2024-12-14 03:11:32.863923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3b40, cid 4, qid 0 00:30:18.004 [2024-12-14 03:11:32.863928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3cc0, cid 5, qid 0 00:30:18.004 [2024-12-14 03:11:32.864044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.004 [2024-12-14 03:11:32.864050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.004 [2024-12-14 03:11:32.864053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3b40) on tqpair=0x1c47ed0 00:30:18.004 [2024-12-14 03:11:32.864062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.004 [2024-12-14 03:11:32.864066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.004 [2024-12-14 03:11:32.864069] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3cc0) on tqpair=0x1c47ed0 00:30:18.004 [2024-12-14 03:11:32.864080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c47ed0) 00:30:18.004 [2024-12-14 03:11:32.864089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.004 [2024-12-14 03:11:32.864098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3cc0, cid 5, qid 0 00:30:18.004 [2024-12-14 03:11:32.864194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.004 [2024-12-14 03:11:32.864199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.004 [2024-12-14 03:11:32.864202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3cc0) on tqpair=0x1c47ed0 00:30:18.004 [2024-12-14 03:11:32.864213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864217] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c47ed0) 00:30:18.004 [2024-12-14 03:11:32.864222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:18.004 [2024-12-14 03:11:32.864231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3cc0, cid 5, qid 0 00:30:18.004 [2024-12-14 03:11:32.864295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.004 [2024-12-14 03:11:32.864301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.004 [2024-12-14 03:11:32.864304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3cc0) on tqpair=0x1c47ed0 00:30:18.004 [2024-12-14 03:11:32.864319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c47ed0) 00:30:18.004 [2024-12-14 03:11:32.864328] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.004 [2024-12-14 03:11:32.864338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3cc0, cid 5, qid 0 00:30:18.004 [2024-12-14 03:11:32.864448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.004 [2024-12-14 03:11:32.864453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.004 [2024-12-14 03:11:32.864456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3cc0) on tqpair=0x1c47ed0 00:30:18.004 [2024-12-14 03:11:32.864471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c47ed0) 00:30:18.004 [2024-12-14 03:11:32.864480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.004 [2024-12-14 03:11:32.864486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c47ed0) 00:30:18.004 [2024-12-14 03:11:32.864494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.004 [2024-12-14 03:11:32.864500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c47ed0) 00:30:18.004 [2024-12-14 03:11:32.864509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.004 [2024-12-14 03:11:32.864514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c47ed0) 00:30:18.004 [2024-12-14 03:11:32.864523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.004 [2024-12-14 03:11:32.864532] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3cc0, cid 5, qid 0 00:30:18.004 [2024-12-14 03:11:32.864537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3b40, cid 4, qid 0 00:30:18.004 [2024-12-14 03:11:32.864541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3e40, cid 6, qid 0 00:30:18.004 [2024-12-14 03:11:32.864545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3fc0, cid 7, qid 0 00:30:18.004 [2024-12-14 03:11:32.864679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.004 [2024-12-14 03:11:32.864685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.004 [2024-12-14 03:11:32.864688] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864691] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c47ed0): datao=0, datal=8192, cccid=5 00:30:18.004 [2024-12-14 03:11:32.864695] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cb3cc0) on tqpair(0x1c47ed0): expected_datao=0, payload_size=8192 00:30:18.004 [2024-12-14 03:11:32.864702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864755] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864759] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.004 [2024-12-14 03:11:32.864769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.004 [2024-12-14 03:11:32.864771] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864774] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c47ed0): datao=0, datal=512, cccid=4 00:30:18.004 [2024-12-14 03:11:32.864778] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cb3b40) on tqpair(0x1c47ed0): expected_datao=0, payload_size=512 00:30:18.004 [2024-12-14 03:11:32.864782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864787] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864790] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.004 [2024-12-14 03:11:32.864799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.004 [2024-12-14 03:11:32.864802] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864805] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c47ed0): datao=0, datal=512, cccid=6 00:30:18.004 [2024-12-14 03:11:32.864809] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cb3e40) on tqpair(0x1c47ed0): expected_datao=0, payload_size=512 00:30:18.004 [2024-12-14 03:11:32.864812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864818] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864821] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.004 [2024-12-14 03:11:32.864830] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.004 [2024-12-14 03:11:32.864833] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864836] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c47ed0): datao=0, datal=4096, cccid=7 00:30:18.004 [2024-12-14 03:11:32.864839] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cb3fc0) on tqpair(0x1c47ed0): expected_datao=0, payload_size=4096 00:30:18.004 [2024-12-14 03:11:32.864843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864849] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864852] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.004 [2024-12-14 03:11:32.864864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.004 [2024-12-14 03:11:32.864867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3cc0) on tqpair=0x1c47ed0 00:30:18.004 [2024-12-14 03:11:32.864880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.004 [2024-12-14 03:11:32.864885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.004 [2024-12-14 03:11:32.864888] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.004 [2024-12-14 03:11:32.864891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3b40) on tqpair=0x1c47ed0 00:30:18.004 [2024-12-14 03:11:32.864899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.004 [2024-12-14 03:11:32.864904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.005 [2024-12-14 03:11:32.864907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.005 [2024-12-14 03:11:32.864910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3e40) on tqpair=0x1c47ed0 00:30:18.005 [2024-12-14 03:11:32.864916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.005 [2024-12-14 03:11:32.864922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.005 [2024-12-14 03:11:32.864925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.005 [2024-12-14 03:11:32.864928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3fc0) on tqpair=0x1c47ed0 00:30:18.005 ===================================================== 00:30:18.005 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:18.005 ===================================================== 00:30:18.005 Controller Capabilities/Features 00:30:18.005 ================================ 00:30:18.005 Vendor ID: 8086 00:30:18.005 Subsystem Vendor ID: 8086 00:30:18.005 Serial Number: SPDK00000000000001 00:30:18.005 Model Number: SPDK bdev Controller 00:30:18.005 Firmware Version: 25.01 00:30:18.005 Recommended Arb Burst: 6 00:30:18.005 IEEE OUI Identifier: e4 d2 5c 00:30:18.005 Multi-path I/O 00:30:18.005 May have multiple subsystem ports: Yes 00:30:18.005 May have multiple controllers: Yes 00:30:18.005 Associated with SR-IOV VF: No 00:30:18.005 Max Data Transfer Size: 131072 00:30:18.005 Max Number of Namespaces: 32 00:30:18.005 Max Number of I/O Queues: 127 
00:30:18.005 NVMe Specification Version (VS): 1.3
00:30:18.005 NVMe Specification Version (Identify): 1.3
00:30:18.005 Maximum Queue Entries: 128
00:30:18.005 Contiguous Queues Required: Yes
00:30:18.005 Arbitration Mechanisms Supported
00:30:18.005 Weighted Round Robin: Not Supported
00:30:18.005 Vendor Specific: Not Supported
00:30:18.005 Reset Timeout: 15000 ms
00:30:18.005 Doorbell Stride: 4 bytes
00:30:18.005 NVM Subsystem Reset: Not Supported
00:30:18.005 Command Sets Supported
00:30:18.005 NVM Command Set: Supported
00:30:18.005 Boot Partition: Not Supported
00:30:18.005 Memory Page Size Minimum: 4096 bytes
00:30:18.005 Memory Page Size Maximum: 4096 bytes
00:30:18.005 Persistent Memory Region: Not Supported
00:30:18.005 Optional Asynchronous Events Supported
00:30:18.005 Namespace Attribute Notices: Supported
00:30:18.005 Firmware Activation Notices: Not Supported
00:30:18.005 ANA Change Notices: Not Supported
00:30:18.005 PLE Aggregate Log Change Notices: Not Supported
00:30:18.005 LBA Status Info Alert Notices: Not Supported
00:30:18.005 EGE Aggregate Log Change Notices: Not Supported
00:30:18.005 Normal NVM Subsystem Shutdown event: Not Supported
00:30:18.005 Zone Descriptor Change Notices: Not Supported
00:30:18.005 Discovery Log Change Notices: Not Supported
00:30:18.005 Controller Attributes
00:30:18.005 128-bit Host Identifier: Supported
00:30:18.005 Non-Operational Permissive Mode: Not Supported
00:30:18.005 NVM Sets: Not Supported
00:30:18.005 Read Recovery Levels: Not Supported
00:30:18.005 Endurance Groups: Not Supported
00:30:18.005 Predictable Latency Mode: Not Supported
00:30:18.005 Traffic Based Keep ALive: Not Supported
00:30:18.005 Namespace Granularity: Not Supported
00:30:18.005 SQ Associations: Not Supported
00:30:18.005 UUID List: Not Supported
00:30:18.005 Multi-Domain Subsystem: Not Supported
00:30:18.005 Fixed Capacity Management: Not Supported
00:30:18.005 Variable Capacity Management: Not Supported
00:30:18.005 Delete Endurance Group: Not Supported
00:30:18.005 Delete NVM Set: Not Supported
00:30:18.005 Extended LBA Formats Supported: Not Supported
00:30:18.005 Flexible Data Placement Supported: Not Supported
00:30:18.005
00:30:18.005 Controller Memory Buffer Support
00:30:18.005 ================================
00:30:18.005 Supported: No
00:30:18.005
00:30:18.005 Persistent Memory Region Support
00:30:18.005 ================================
00:30:18.005 Supported: No
00:30:18.005
00:30:18.005 Admin Command Set Attributes
00:30:18.005 ============================
00:30:18.005 Security Send/Receive: Not Supported
00:30:18.005 Format NVM: Not Supported
00:30:18.005 Firmware Activate/Download: Not Supported
00:30:18.005 Namespace Management: Not Supported
00:30:18.005 Device Self-Test: Not Supported
00:30:18.005 Directives: Not Supported
00:30:18.005 NVMe-MI: Not Supported
00:30:18.005 Virtualization Management: Not Supported
00:30:18.005 Doorbell Buffer Config: Not Supported
00:30:18.005 Get LBA Status Capability: Not Supported
00:30:18.005 Command & Feature Lockdown Capability: Not Supported
00:30:18.005 Abort Command Limit: 4
00:30:18.005 Async Event Request Limit: 4
00:30:18.005 Number of Firmware Slots: N/A
00:30:18.005 Firmware Slot 1 Read-Only: N/A
00:30:18.005 Firmware Activation Without Reset: N/A
00:30:18.005 Multiple Update Detection Support: N/A
00:30:18.005 Firmware Update Granularity: No Information Provided
00:30:18.005 Per-Namespace SMART Log: No
00:30:18.005 Asymmetric Namespace Access Log Page: Not Supported
00:30:18.005 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:30:18.005 Command Effects Log Page: Supported
00:30:18.005 Get Log Page Extended Data: Supported
00:30:18.005 Telemetry Log Pages: Not Supported
00:30:18.005 Persistent Event Log Pages: Not Supported
00:30:18.005 Supported Log Pages Log Page: May Support
00:30:18.005 Commands Supported & Effects Log Page: Not Supported
00:30:18.005 Feature Identifiers & Effects Log Page:May Support
00:30:18.005 NVMe-MI Commands & Effects Log Page: May Support
00:30:18.005 Data Area 4 for Telemetry Log: Not Supported
00:30:18.005 Error Log Page Entries Supported: 128
00:30:18.005 Keep Alive: Supported
00:30:18.005 Keep Alive Granularity: 10000 ms
00:30:18.005
00:30:18.005 NVM Command Set Attributes
00:30:18.005 ==========================
00:30:18.005 Submission Queue Entry Size
00:30:18.005 Max: 64
00:30:18.005 Min: 64
00:30:18.005 Completion Queue Entry Size
00:30:18.005 Max: 16
00:30:18.005 Min: 16
00:30:18.005 Number of Namespaces: 32
00:30:18.005 Compare Command: Supported
00:30:18.005 Write Uncorrectable Command: Not Supported
00:30:18.005 Dataset Management Command: Supported
00:30:18.005 Write Zeroes Command: Supported
00:30:18.005 Set Features Save Field: Not Supported
00:30:18.005 Reservations: Supported
00:30:18.005 Timestamp: Not Supported
00:30:18.005 Copy: Supported
00:30:18.005 Volatile Write Cache: Present
00:30:18.005 Atomic Write Unit (Normal): 1
00:30:18.005 Atomic Write Unit (PFail): 1
00:30:18.005 Atomic Compare & Write Unit: 1
00:30:18.005 Fused Compare & Write: Supported
00:30:18.005 Scatter-Gather List
00:30:18.005 SGL Command Set: Supported
00:30:18.005 SGL Keyed: Supported
00:30:18.005 SGL Bit Bucket Descriptor: Not Supported
00:30:18.005 SGL Metadata Pointer: Not Supported
00:30:18.005 Oversized SGL: Not Supported
00:30:18.005 SGL Metadata Address: Not Supported
00:30:18.005 SGL Offset: Supported
00:30:18.005 Transport SGL Data Block: Not Supported
00:30:18.005 Replay Protected Memory Block: Not Supported
00:30:18.005
00:30:18.005 Firmware Slot Information
00:30:18.005 =========================
00:30:18.005 Active slot: 1
00:30:18.005 Slot 1 Firmware Revision: 25.01
00:30:18.005
00:30:18.005
00:30:18.005 Commands Supported and Effects
00:30:18.005 ==============================
00:30:18.005 Admin Commands
00:30:18.005 --------------
00:30:18.005 Get Log Page (02h): Supported
00:30:18.005 Identify (06h): Supported
00:30:18.005 Abort (08h): Supported
00:30:18.005 Set Features (09h): Supported
00:30:18.005 Get Features (0Ah): Supported
00:30:18.005 Asynchronous Event Request (0Ch): Supported
00:30:18.005 Keep Alive (18h): Supported
00:30:18.005 I/O Commands
00:30:18.005 ------------
00:30:18.005 Flush (00h): Supported LBA-Change
00:30:18.005 Write (01h): Supported LBA-Change
00:30:18.005 Read (02h): Supported
00:30:18.005 Compare (05h): Supported
00:30:18.005 Write Zeroes (08h): Supported LBA-Change
00:30:18.005 Dataset Management (09h): Supported LBA-Change
00:30:18.005 Copy (19h): Supported LBA-Change
00:30:18.005
00:30:18.005 Error Log
00:30:18.005 =========
00:30:18.005
00:30:18.005 Arbitration
00:30:18.005 ===========
00:30:18.005 Arbitration Burst: 1
00:30:18.005
00:30:18.005 Power Management
00:30:18.005 ================
00:30:18.005 Number of Power States: 1
00:30:18.005 Current Power State: Power State #0
00:30:18.005 Power State #0:
00:30:18.005 Max Power: 0.00 W
00:30:18.005 Non-Operational State: Operational
00:30:18.005 Entry Latency: Not Reported
00:30:18.005 Exit Latency: Not Reported
00:30:18.005 Relative Read Throughput: 0
00:30:18.005
Relative Read Latency: 0 00:30:18.005 Relative Write Throughput: 0 00:30:18.005 Relative Write Latency: 0 00:30:18.005 Idle Power: Not Reported 00:30:18.005 Active Power: Not Reported 00:30:18.005 Non-Operational Permissive Mode: Not Supported 00:30:18.005 00:30:18.005 Health Information 00:30:18.005 ================== 00:30:18.005 Critical Warnings: 00:30:18.005 Available Spare Space: OK 00:30:18.005 Temperature: OK 00:30:18.005 Device Reliability: OK 00:30:18.005 Read Only: No 00:30:18.005 Volatile Memory Backup: OK 00:30:18.005 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:18.006 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:18.006 Available Spare: 0% 00:30:18.006 Available Spare Threshold: 0% 00:30:18.006 Life Percentage Used:[2024-12-14 03:11:32.865006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865010] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c47ed0) 00:30:18.006 [2024-12-14 03:11:32.865016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.006 [2024-12-14 03:11:32.865027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb3fc0, cid 7, qid 0 00:30:18.006 [2024-12-14 03:11:32.865144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.006 [2024-12-14 03:11:32.865150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.006 [2024-12-14 03:11:32.865153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3fc0) on tqpair=0x1c47ed0 00:30:18.006 [2024-12-14 03:11:32.865181] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:18.006 [2024-12-14 03:11:32.865189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3540) on tqpair=0x1c47ed0 00:30:18.006 [2024-12-14 03:11:32.865194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.006 [2024-12-14 03:11:32.865199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb36c0) on tqpair=0x1c47ed0 00:30:18.006 [2024-12-14 03:11:32.865203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.006 [2024-12-14 03:11:32.865207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb3840) on tqpair=0x1c47ed0 00:30:18.006 [2024-12-14 03:11:32.865211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.006 [2024-12-14 03:11:32.865215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.006 [2024-12-14 03:11:32.865219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.006 [2024-12-14 03:11:32.865226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.006 [2024-12-14 03:11:32.865238] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.006 [2024-12-14 03:11:32.865249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.006 [2024-12-14 03:11:32.865328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.006 [2024-12-14 03:11:32.865335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.006 [2024-12-14 03:11:32.865338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.006 [2024-12-14 03:11:32.865347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.006 [2024-12-14 03:11:32.865358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.006 [2024-12-14 03:11:32.865371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.006 [2024-12-14 03:11:32.865465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.006 [2024-12-14 03:11:32.865471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.006 [2024-12-14 03:11:32.865473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.006 [2024-12-14 03:11:32.865480] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:18.006 [2024-12-14 03:11:32.865485] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:18.006 [2024-12-14 03:11:32.865492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.006 [2024-12-14 03:11:32.865504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.006 [2024-12-14 03:11:32.865514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.006 [2024-12-14 03:11:32.865576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.006 [2024-12-14 03:11:32.865581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.006 [2024-12-14 03:11:32.865584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.006 [2024-12-14 03:11:32.865595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865602] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.006 [2024-12-14 03:11:32.865607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.006 [2024-12-14 03:11:32.865617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.006 [2024-12-14 03:11:32.865717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.006 [2024-12-14 03:11:32.865722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.006 [2024-12-14 03:11:32.865725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.006 [2024-12-14 03:11:32.865736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.006 [2024-12-14 03:11:32.865748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.006 [2024-12-14 03:11:32.865757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.006 [2024-12-14 03:11:32.865818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.006 [2024-12-14 03:11:32.865823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.006 [2024-12-14 03:11:32.865826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.006 [2024-12-14 03:11:32.865838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.006 [2024-12-14 03:11:32.865850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.006 [2024-12-14 03:11:32.865861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.006 [2024-12-14 03:11:32.865967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.006 [2024-12-14 03:11:32.865972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.006 [2024-12-14 03:11:32.865975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.006 [2024-12-14 03:11:32.865987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.865993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.006 [2024-12-14 03:11:32.865999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.006 [2024-12-14 03:11:32.866008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.006 [2024-12-14 03:11:32.866066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.006 [2024-12-14 03:11:32.866072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.006 [2024-12-14 03:11:32.866074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.866077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.006 [2024-12-14 03:11:32.866085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.866089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.866092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.006 [2024-12-14 03:11:32.866097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.006 [2024-12-14 03:11:32.866106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.006 [2024-12-14 03:11:32.866170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.006 [2024-12-14 03:11:32.866175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.006 [2024-12-14 03:11:32.866178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.866181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.006 [2024-12-14 03:11:32.866189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.866192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.866195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.006 [2024-12-14 03:11:32.866201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.006 [2024-12-14 03:11:32.866210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.006 [2024-12-14 03:11:32.866271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.006 [2024-12-14 03:11:32.866276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.006 [2024-12-14 03:11:32.866279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.866283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.006 [2024-12-14 03:11:32.866290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.866294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.006 [2024-12-14 03:11:32.866297] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.006 [2024-12-14 03:11:32.866302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.006 [2024-12-14 03:11:32.866317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.006 [2024-12-14 
03:11:32.866434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.007 [2024-12-14 03:11:32.866441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.007 [2024-12-14 03:11:32.866443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866447] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.007 [2024-12-14 03:11:32.866455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866458] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.007 [2024-12-14 03:11:32.866467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.007 [2024-12-14 03:11:32.866476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.007 [2024-12-14 03:11:32.866538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.007 [2024-12-14 03:11:32.866544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.007 [2024-12-14 03:11:32.866547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.007 [2024-12-14 03:11:32.866558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866561] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.007 [2024-12-14 03:11:32.866569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.007 [2024-12-14 03:11:32.866579] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.007 [2024-12-14 03:11:32.866673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.007 [2024-12-14 03:11:32.866679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.007 [2024-12-14 03:11:32.866682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.007 [2024-12-14 03:11:32.866692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.007 [2024-12-14 03:11:32.866704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.007 [2024-12-14 03:11:32.866713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.007 [2024-12-14 03:11:32.866825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.007 [2024-12-14 03:11:32.866831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.007 
[2024-12-14 03:11:32.866834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.007 [2024-12-14 03:11:32.866844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.007 [2024-12-14 03:11:32.866856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.007 [2024-12-14 03:11:32.866865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.007 [2024-12-14 03:11:32.866926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.007 [2024-12-14 03:11:32.866932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.007 [2024-12-14 03:11:32.866935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.007 [2024-12-14 03:11:32.866946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.866952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.007 [2024-12-14 03:11:32.866957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.007 [2024-12-14 03:11:32.866966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.007 [2024-12-14 03:11:32.867026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.007 [2024-12-14 03:11:32.867032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.007 [2024-12-14 03:11:32.867034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.867038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.007 [2024-12-14 03:11:32.867046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.867049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.867052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.007 [2024-12-14 03:11:32.867057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.007 [2024-12-14 03:11:32.867067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.007 [2024-12-14 03:11:32.867127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.007 [2024-12-14 03:11:32.867133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.007 [2024-12-14 03:11:32.867136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.867139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.007 [2024-12-14 03:11:32.867147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.867150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.867153] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.007 [2024-12-14 03:11:32.867158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.007 [2024-12-14 03:11:32.867167] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.007 [2024-12-14 03:11:32.867279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.007 [2024-12-14 03:11:32.867285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.007 [2024-12-14 03:11:32.867288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.867291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.007 [2024-12-14 03:11:32.867298] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.867302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.867305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c47ed0) 00:30:18.007 [2024-12-14 03:11:32.867310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.007 [2024-12-14 03:11:32.871330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cb39c0, cid 3, qid 0 00:30:18.007 [2024-12-14 03:11:32.871396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.007 [2024-12-14 03:11:32.871404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.007 [2024-12-14 03:11:32.871407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.007 [2024-12-14 03:11:32.871410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cb39c0) on tqpair=0x1c47ed0 00:30:18.007 [2024-12-14 03:11:32.871418] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:30:18.007 0% 00:30:18.007 Data Units Read: 0 00:30:18.007 Data Units Written: 0 00:30:18.007 Host Read Commands: 0 00:30:18.007 Host Write Commands: 0 00:30:18.007 Controller Busy Time: 0 minutes 00:30:18.007 Power Cycles: 0 00:30:18.007 Power On Hours: 0 hours 00:30:18.007 Unsafe Shutdowns: 0 00:30:18.007 Unrecoverable Media Errors: 0 00:30:18.007 Lifetime Error Log Entries: 0 00:30:18.007 Warning Temperature Time: 0 minutes 00:30:18.007 Critical Temperature Time: 0 minutes 00:30:18.007 00:30:18.007 Number of Queues 00:30:18.007 ================ 00:30:18.007 Number of I/O Submission Queues: 127 00:30:18.007 Number of I/O Completion Queues: 127 00:30:18.007 00:30:18.007 Active Namespaces 00:30:18.007 ================= 00:30:18.007 Namespace ID:1 00:30:18.007 Error Recovery Timeout: Unlimited 00:30:18.007 Command Set Identifier: NVM (00h) 00:30:18.007 Deallocate: Supported 00:30:18.007 Deallocated/Unwritten Error: Not Supported 00:30:18.007 Deallocated Read Value: Unknown 00:30:18.007 Deallocate in Write Zeroes: Not Supported 00:30:18.007 Deallocated Guard Field: 0xFFFF 00:30:18.007 Flush: 
Supported 00:30:18.007 Reservation: Supported 00:30:18.007 Namespace Sharing Capabilities: Multiple Controllers 00:30:18.007 Size (in LBAs): 131072 (0GiB) 00:30:18.007 Capacity (in LBAs): 131072 (0GiB) 00:30:18.007 Utilization (in LBAs): 131072 (0GiB) 00:30:18.007 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:18.007 EUI64: ABCDEF0123456789 00:30:18.007 UUID: ebbfd9b9-eb24-465f-958d-cc38bee2789f 00:30:18.007 Thin Provisioning: Not Supported 00:30:18.007 Per-NS Atomic Units: Yes 00:30:18.007 Atomic Boundary Size (Normal): 0 00:30:18.007 Atomic Boundary Size (PFail): 0 00:30:18.007 Atomic Boundary Offset: 0 00:30:18.007 Maximum Single Source Range Length: 65535 00:30:18.007 Maximum Copy Length: 65535 00:30:18.007 Maximum Source Range Count: 1 00:30:18.007 NGUID/EUI64 Never Reused: No 00:30:18.007 Namespace Write Protected: No 00:30:18.007 Number of LBA Formats: 1 00:30:18.007 Current LBA Format: LBA Format #00 00:30:18.007 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:18.007 00:30:18.007 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:18.007 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:18.007 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.007 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:18.007 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.007 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:18.007 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:18.007 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:18.007 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:18.008 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:18.008 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:18.008 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:18.008 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:18.008 rmmod nvme_tcp 00:30:18.008 rmmod nvme_fabrics 00:30:18.008 rmmod nvme_keyring 00:30:18.008 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:18.008 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:18.008 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:18.008 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 355814 ']' 00:30:18.008 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 355814 00:30:18.008 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 355814 ']' 00:30:18.008 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 355814 00:30:18.008 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:18.008 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.008 03:11:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 355814 00:30:18.008 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:30:18.008 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:18.008 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 355814' 00:30:18.008 killing process with pid 355814 00:30:18.008 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 355814 00:30:18.008 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 355814 00:30:18.267 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:18.267 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:18.267 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:18.267 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:18.267 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:18.267 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:18.267 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:18.267 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:18.267 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:18.267 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.267 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.267 03:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.171 03:11:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:20.171 00:30:20.171 real 0m9.166s 00:30:20.171 user 0m5.151s 00:30:20.171 sys 0m4.749s 00:30:20.171 03:11:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:20.171 03:11:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:20.171 ************************************ 00:30:20.171 END TEST nvmf_identify 00:30:20.171 ************************************ 00:30:20.171 03:11:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:20.171 03:11:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:20.171 03:11:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:20.171 03:11:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.430 ************************************ 00:30:20.430 START TEST nvmf_perf 00:30:20.430 ************************************ 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:20.430 * Looking for test storage... 
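For reference, the nvmftestfini teardown traced above (delete the test subsystem, unload the host-side fabrics modules, stop the target, and undo the iptables/namespace plumbing) reduces to roughly the commands below. This is a sketch, not the harness itself: the pid 355814, the NQN and the cvl_0_* interface names are the values from this run, and remove_spdk_ns is assumed to amount to deleting the cvl_0_0_ns_spdk namespace.

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
    modprobe -v -r nvme-tcp                # unload host modules (drags in nvme_fabrics/nvme_keyring, as logged above)
    modprobe -v -r nvme-fabrics
    kill 355814                            # killprocess: terminate nvmf_tgt and wait for it to exit
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the SPDK-tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk        # assumed equivalent of remove_spdk_ns
    ip -4 addr flush cvl_0_1               # clear the initiator-side test address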
00:30:20.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:20.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.430 --rc genhtml_branch_coverage=1 00:30:20.430 --rc genhtml_function_coverage=1 00:30:20.430 --rc genhtml_legend=1 00:30:20.430 --rc geninfo_all_blocks=1 00:30:20.430 --rc geninfo_unexecuted_blocks=1 00:30:20.430 00:30:20.430 ' 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:20.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.430 --rc genhtml_branch_coverage=1 00:30:20.430 --rc genhtml_function_coverage=1 00:30:20.430 --rc genhtml_legend=1 00:30:20.430 --rc geninfo_all_blocks=1 00:30:20.430 --rc geninfo_unexecuted_blocks=1 00:30:20.430 00:30:20.430 ' 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:20.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.430 --rc genhtml_branch_coverage=1 00:30:20.430 --rc genhtml_function_coverage=1 00:30:20.430 --rc genhtml_legend=1 00:30:20.430 --rc geninfo_all_blocks=1 00:30:20.430 --rc geninfo_unexecuted_blocks=1 00:30:20.430 00:30:20.430 ' 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:20.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.430 --rc genhtml_branch_coverage=1 00:30:20.430 --rc genhtml_function_coverage=1 00:30:20.430 --rc genhtml_legend=1 00:30:20.430 --rc geninfo_all_blocks=1 00:30:20.430 --rc geninfo_unexecuted_blocks=1 00:30:20.430 00:30:20.430 ' 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.430 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:20.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.431 03:11:35 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:20.431 03:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:27.001 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:27.001 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:27.001 Found net devices under 0000:af:00.0: cvl_0_0 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:27.001 03:11:41 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:27.001 Found net devices under 0000:af:00.1: cvl_0_1 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:27.001 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:27.002 03:11:41 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:27.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:27.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:30:27.002 00:30:27.002 --- 10.0.0.2 ping statistics --- 00:30:27.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.002 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:27.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:27.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:30:27.002 00:30:27.002 --- 10.0.0.1 ping statistics --- 00:30:27.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.002 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=358080 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 358080 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 358080 ']' 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:27.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:27.002 [2024-12-14 03:11:41.493013] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:30:27.002 [2024-12-14 03:11:41.493054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.002 [2024-12-14 03:11:41.571751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:27.002 [2024-12-14 03:11:41.594601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:27.002 [2024-12-14 03:11:41.594637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:27.002 [2024-12-14 03:11:41.594644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.002 [2024-12-14 03:11:41.594650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.002 [2024-12-14 03:11:41.594655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:27.002 [2024-12-14 03:11:41.595933] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.002 [2024-12-14 03:11:41.596043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:27.002 [2024-12-14 03:11:41.596148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.002 [2024-12-14 03:11:41.596150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:27.002 03:11:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:30.287 03:11:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:30.287 03:11:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:30.287 03:11:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:30:30.287 03:11:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:30.287 03:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
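The trace above is nvmf_tcp_init wiring the two ports of the test NIC into an initiator/target pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened, both directions are ping-checked, and nvmf_tgt is then started inside the namespace before perf.sh begins issuing rpc.py calls. As a readability aid, here is the same bring-up condensed into a standalone sketch; interface names, addresses and flags are taken from the log, while the explicit set -e and the backgrounding of nvmf_tgt are assumptions, so treat it as an illustration rather than a substitute for the nvmf/common.sh helpers:

#!/usr/bin/env bash
# Minimal sketch of the namespace topology built above. Assumes the same
# two-port NIC (kernel netdevs cvl_0_0 / cvl_0_1) and root privileges; the
# real logic lives in nvmf_tcp_init in nvmf/common.sh.
set -e

NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start from clean interfaces, then move one port into a private namespace so
# initiator and target traffic crosses the physical link rather than loopback.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

# 10.0.0.1 stays in the root namespace (initiator side); 10.0.0.2 lives on the
# namespaced port (target side).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port. The harness additionally tags the rule with
# -m comment --comment 'SPDK_NVMF:...' so the cleanup path can strip it again.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Launch the NVMe-oF target inside the namespace. Backgrounding it here is an
# assumption; the test scripts instead go through nvmfappstart/waitforlisten,
# which block until /var/tmp/spdk.sock is accepting RPCs.
ip netns exec "$NS" "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &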
00:30:30.287 03:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:30:30.287 03:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:30.287 03:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:30.287 03:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:30.287 [2024-12-14 03:11:45.334427] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:30.287 03:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:30.546 03:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:30.546 03:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:30.805 03:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:30.805 03:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:31.063 03:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.063 [2024-12-14 03:11:46.115810] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.063 03:11:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:31.321 03:11:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:30:31.321 03:11:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:31.321 03:11:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:31.321 03:11:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:32.697 Initializing NVMe Controllers 00:30:32.697 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:30:32.697 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:30:32.697 Initialization complete. Launching workers. 
00:30:32.697 ======================================================== 00:30:32.697 Latency(us) 00:30:32.697 Device Information : IOPS MiB/s Average min max 00:30:32.697 PCIE (0000:5e:00.0) NSID 1 from core 0: 99311.81 387.94 321.60 37.93 4300.24 00:30:32.697 ======================================================== 00:30:32.697 Total : 99311.81 387.94 321.60 37.93 4300.24 00:30:32.697 00:30:32.697 03:11:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:34.073 Initializing NVMe Controllers 00:30:34.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:34.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:34.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:34.073 Initialization complete. Launching workers. 00:30:34.073 ======================================================== 00:30:34.073 Latency(us) 00:30:34.073 Device Information : IOPS MiB/s Average min max 00:30:34.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 103.00 0.40 9911.06 106.65 44797.04 00:30:34.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19681.71 7206.42 47886.49 00:30:34.073 ======================================================== 00:30:34.073 Total : 154.00 0.60 13146.79 106.65 47886.49 00:30:34.073 00:30:34.073 03:11:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:35.447 Initializing NVMe Controllers 00:30:35.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:35.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:35.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:35.447 Initialization complete. Launching workers. 00:30:35.447 ======================================================== 00:30:35.447 Latency(us) 00:30:35.447 Device Information : IOPS MiB/s Average min max 00:30:35.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11312.99 44.19 2831.93 494.47 10095.15 00:30:35.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3790.00 14.80 8478.01 5782.41 19758.73 00:30:35.447 ======================================================== 00:30:35.447 Total : 15102.98 59.00 4248.78 494.47 19758.73 00:30:35.447 00:30:35.447 03:11:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:35.447 03:11:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:35.447 03:11:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:37.981 Initializing NVMe Controllers 00:30:37.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:37.981 Controller IO queue size 128, less than required. 00:30:37.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:37.981 Controller IO queue size 128, less than required. 00:30:37.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:37.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:37.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:37.981 Initialization complete. Launching workers. 00:30:37.981 ======================================================== 00:30:37.981 Latency(us) 00:30:37.981 Device Information : IOPS MiB/s Average min max 00:30:37.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1833.67 458.42 70749.36 47791.52 103936.38 00:30:37.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 628.02 157.00 215292.14 80303.84 318045.29 00:30:37.981 ======================================================== 00:30:37.981 Total : 2461.68 615.42 107624.61 47791.52 318045.29 00:30:37.981 00:30:37.981 03:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:38.240 No valid NVMe controllers or AIO or URING devices found 00:30:38.240 Initializing NVMe Controllers 00:30:38.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:38.240 Controller IO queue size 128, less than required. 00:30:38.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:38.240 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:38.240 Controller IO queue size 128, less than required. 00:30:38.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:38.240 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:38.240 WARNING: Some requested NVMe devices were skipped 00:30:38.240 03:11:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:41.530 Initializing NVMe Controllers 00:30:41.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:41.530 Controller IO queue size 128, less than required. 00:30:41.530 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:41.530 Controller IO queue size 128, less than required. 00:30:41.530 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:41.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:41.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:41.530 Initialization complete. Launching workers. 
00:30:41.530 00:30:41.530 ==================== 00:30:41.530 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:41.530 TCP transport: 00:30:41.530 polls: 15047 00:30:41.530 idle_polls: 11510 00:30:41.530 sock_completions: 3537 00:30:41.530 nvme_completions: 6641 00:30:41.530 submitted_requests: 9982 00:30:41.530 queued_requests: 1 00:30:41.530 00:30:41.530 ==================== 00:30:41.530 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:41.530 TCP transport: 00:30:41.530 polls: 11100 00:30:41.530 idle_polls: 7570 00:30:41.530 sock_completions: 3530 00:30:41.530 nvme_completions: 6731 00:30:41.530 submitted_requests: 10094 00:30:41.530 queued_requests: 1 00:30:41.530 ======================================================== 00:30:41.530 Latency(us) 00:30:41.530 Device Information : IOPS MiB/s Average min max 00:30:41.530 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1659.93 414.98 79271.69 47706.94 125634.45 00:30:41.530 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1682.43 420.61 76581.99 46848.62 112142.12 00:30:41.530 ======================================================== 00:30:41.530 Total : 3342.35 835.59 77917.79 46848.62 125634.45 00:30:41.530 00:30:41.530 03:11:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:41.530 03:11:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:41.530 03:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:41.530 03:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:30:41.530 03:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=3f7e386f-0275-40eb-a459-e171fbbe1069 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 3f7e386f-0275-40eb-a459-e171fbbe1069 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=3f7e386f-0275-40eb-a459-e171fbbe1069 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:44.815 { 00:30:44.815 "uuid": "3f7e386f-0275-40eb-a459-e171fbbe1069", 00:30:44.815 "name": "lvs_0", 00:30:44.815 "base_bdev": "Nvme0n1", 00:30:44.815 "total_data_clusters": 238234, 00:30:44.815 "free_clusters": 238234, 00:30:44.815 "block_size": 512, 00:30:44.815 "cluster_size": 4194304 00:30:44.815 } 00:30:44.815 ]' 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="3f7e386f-0275-40eb-a459-e171fbbe1069") .free_clusters' 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:44.815 03:11:59 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="3f7e386f-0275-40eb-a459-e171fbbe1069") .cluster_size' 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:30:44.815 952936 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:44.815 03:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3f7e386f-0275-40eb-a459-e171fbbe1069 lbd_0 20480 00:30:45.074 03:12:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=f7e5cbc3-c7c1-45c8-9e09-2bd8743cc601 00:30:45.074 03:12:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore f7e5cbc3-c7c1-45c8-9e09-2bd8743cc601 lvs_n_0 00:30:45.641 03:12:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=9bdbf3e0-09ae-4306-9de0-6a952b38cb82 00:30:45.641 03:12:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 9bdbf3e0-09ae-4306-9de0-6a952b38cb82 00:30:45.641 03:12:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=9bdbf3e0-09ae-4306-9de0-6a952b38cb82 00:30:45.641 03:12:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:45.641 03:12:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:45.641 03:12:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:45.641 03:12:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:45.900 03:12:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:45.900 { 00:30:45.900 "uuid": "3f7e386f-0275-40eb-a459-e171fbbe1069", 00:30:45.900 "name": "lvs_0", 00:30:45.900 "base_bdev": "Nvme0n1", 00:30:45.900 "total_data_clusters": 238234, 00:30:45.900 "free_clusters": 233114, 00:30:45.900 "block_size": 512, 00:30:45.900 "cluster_size": 4194304 00:30:45.900 }, 00:30:45.900 { 00:30:45.900 "uuid": "9bdbf3e0-09ae-4306-9de0-6a952b38cb82", 00:30:45.900 "name": "lvs_n_0", 00:30:45.900 "base_bdev": "f7e5cbc3-c7c1-45c8-9e09-2bd8743cc601", 00:30:45.900 "total_data_clusters": 5114, 00:30:45.900 "free_clusters": 5114, 00:30:45.900 "block_size": 512, 00:30:45.900 "cluster_size": 4194304 00:30:45.900 } 00:30:45.900 ]' 00:30:45.900 03:12:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="9bdbf3e0-09ae-4306-9de0-6a952b38cb82") .free_clusters' 00:30:45.900 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:45.900 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="9bdbf3e0-09ae-4306-9de0-6a952b38cb82") .cluster_size' 00:30:46.158 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:46.158 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:46.158 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:30:46.159 20456 00:30:46.159 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:46.159 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9bdbf3e0-09ae-4306-9de0-6a952b38cb82 lbd_nest_0 20456 00:30:46.159 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=f105526b-60c2-4e39-a250-09273bd11e9e 00:30:46.159 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:46.417 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:46.417 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 f105526b-60c2-4e39-a250-09273bd11e9e 00:30:46.675 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.934 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:46.934 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:46.934 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:46.934 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:46.934 03:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:59.136 Initializing NVMe Controllers 00:30:59.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:59.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:59.136 Initialization complete. Launching workers. 00:30:59.136 ======================================================== 00:30:59.136 Latency(us) 00:30:59.136 Device Information : IOPS MiB/s Average min max 00:30:59.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.30 0.02 21209.00 123.18 44820.04 00:30:59.136 ======================================================== 00:30:59.136 Total : 47.30 0.02 21209.00 123.18 44820.04 00:30:59.136 00:30:59.136 03:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:59.136 03:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:09.114 Initializing NVMe Controllers 00:31:09.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:09.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:09.114 Initialization complete. Launching workers. 
00:31:09.114 ======================================================== 00:31:09.114 Latency(us) 00:31:09.114 Device Information : IOPS MiB/s Average min max 00:31:09.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 66.30 8.29 15088.92 5142.76 47887.73 00:31:09.114 ======================================================== 00:31:09.114 Total : 66.30 8.29 15088.92 5142.76 47887.73 00:31:09.114 00:31:09.114 03:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:09.114 03:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:09.114 03:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:19.092 Initializing NVMe Controllers 00:31:19.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:19.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:19.092 Initialization complete. Launching workers. 00:31:19.092 ======================================================== 00:31:19.092 Latency(us) 00:31:19.093 Device Information : IOPS MiB/s Average min max 00:31:19.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8567.68 4.18 3734.43 251.64 9198.16 00:31:19.093 ======================================================== 00:31:19.093 Total : 8567.68 4.18 3734.43 251.64 9198.16 00:31:19.093 00:31:19.093 03:12:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:19.093 03:12:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:29.085 Initializing NVMe Controllers 00:31:29.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:29.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:29.085 Initialization complete. Launching workers. 00:31:29.085 ======================================================== 00:31:29.085 Latency(us) 00:31:29.085 Device Information : IOPS MiB/s Average min max 00:31:29.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4480.69 560.09 7141.42 648.17 15793.82 00:31:29.085 ======================================================== 00:31:29.085 Total : 4480.69 560.09 7141.42 648.17 15793.82 00:31:29.085 00:31:29.085 03:12:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:29.085 03:12:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:29.085 03:12:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:39.066 Initializing NVMe Controllers 00:31:39.066 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:39.066 Controller IO queue size 128, less than required. 00:31:39.066 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:39.066 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:39.066 Initialization complete. Launching workers. 00:31:39.066 ======================================================== 00:31:39.066 Latency(us) 00:31:39.066 Device Information : IOPS MiB/s Average min max 00:31:39.066 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15834.44 7.73 8085.81 1349.11 22842.32 00:31:39.066 ======================================================== 00:31:39.066 Total : 15834.44 7.73 8085.81 1349.11 22842.32 00:31:39.066 00:31:39.066 03:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:39.066 03:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:49.042 Initializing NVMe Controllers 00:31:49.042 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:49.042 Controller IO queue size 128, less than required. 00:31:49.042 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:49.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:49.042 Initialization complete. Launching workers. 00:31:49.042 ======================================================== 00:31:49.042 Latency(us) 00:31:49.042 Device Information : IOPS MiB/s Average min max 00:31:49.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1173.58 146.70 109220.35 23624.13 236705.95 00:31:49.042 ======================================================== 00:31:49.042 Total : 1173.58 146.70 109220.35 23624.13 236705.95 00:31:49.042 00:31:49.042 03:13:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:49.042 03:13:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f105526b-60c2-4e39-a250-09273bd11e9e 00:31:49.609 03:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:49.867 03:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f7e5cbc3-c7c1-45c8-9e09-2bd8743cc601 00:31:50.126 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:50.126 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:50.126 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:50.126 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:50.126 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:50.126 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:50.126 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:50.126 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:50.126 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:50.126 rmmod nvme_tcp 
00:31:50.384 rmmod nvme_fabrics 00:31:50.384 rmmod nvme_keyring 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 358080 ']' 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 358080 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 358080 ']' 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 358080 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 358080 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358080' 00:31:50.384 killing process with pid 358080 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 358080 00:31:50.384 03:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 358080 00:31:51.762 03:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:51.762 03:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:51.762 03:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:51.762 03:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:51.762 03:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:51.762 03:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:51.762 03:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:51.762 03:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:51.762 03:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:51.762 03:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.762 03:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.762 03:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.299 03:13:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:54.299 00:31:54.299 real 1m33.559s 00:31:54.299 user 5m33.762s 00:31:54.299 sys 0m17.004s 00:31:54.299 03:13:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.299 03:13:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:54.299 ************************************ 00:31:54.299 END TEST nvmf_perf 00:31:54.299 ************************************ 00:31:54.299 03:13:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:54.299 03:13:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:54.299 03:13:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.299 03:13:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.299 ************************************ 00:31:54.299 START TEST nvmf_fio_host 00:31:54.299 ************************************ 00:31:54.299 03:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:54.299 * Looking for test storage... 00:31:54.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:54.299 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:54.299 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:54.299 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:54.299 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:54.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.300 --rc genhtml_branch_coverage=1 00:31:54.300 --rc genhtml_function_coverage=1 00:31:54.300 --rc genhtml_legend=1 00:31:54.300 --rc geninfo_all_blocks=1 00:31:54.300 --rc geninfo_unexecuted_blocks=1 00:31:54.300 00:31:54.300 ' 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:54.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.300 --rc genhtml_branch_coverage=1 00:31:54.300 --rc genhtml_function_coverage=1 00:31:54.300 --rc genhtml_legend=1 00:31:54.300 --rc geninfo_all_blocks=1 00:31:54.300 --rc geninfo_unexecuted_blocks=1 00:31:54.300 00:31:54.300 ' 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:54.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.300 --rc genhtml_branch_coverage=1 00:31:54.300 --rc genhtml_function_coverage=1 00:31:54.300 --rc genhtml_legend=1 00:31:54.300 --rc geninfo_all_blocks=1 00:31:54.300 --rc geninfo_unexecuted_blocks=1 00:31:54.300 00:31:54.300 ' 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:54.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.300 --rc genhtml_branch_coverage=1 00:31:54.300 --rc genhtml_function_coverage=1 00:31:54.300 --rc genhtml_legend=1 00:31:54.300 --rc geninfo_all_blocks=1 00:31:54.300 --rc geninfo_unexecuted_blocks=1 00:31:54.300 00:31:54.300 ' 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.300 03:13:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.300 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:54.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:54.301 
03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:54.301 03:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:00.869 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:00.869 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:00.869 Found net devices under 0000:af:00.0: cvl_0_0 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:00.869 Found net devices under 0000:af:00.1: cvl_0_1 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.869 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:32:00.870 00:32:00.870 --- 10.0.0.2 ping statistics --- 00:32:00.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.870 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:32:00.870 00:32:00.870 --- 10.0.0.1 ping statistics --- 00:32:00.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.870 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.870 03:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=361946 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 361946 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 361946 ']' 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.870 [2024-12-14 03:13:15.078748] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
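The block above is the nvmf_tcp_init step from nvmf/common.sh: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the other port (cvl_0_1) stays in the host namespace as the initiator at 10.0.0.1. A condensed, hand-written sketch of the same sequence follows; it is reconstructed from this trace rather than copied from the script source, so treat it as illustrative only.

# Loopback NVMe/TCP topology as traced above (nvmf_tcp_init); interface names
# and addresses are taken from this log, not from the script itself.
ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# the real call also tags the rule with an SPDK_NVMF comment so cleanup can find it
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
modprobe nvme-tcp                                   # kernel initiator transport

With the path verified in both directions, nvmf_tgt is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as traced above), so it listens on 10.0.0.2:4420 while fio connects from the host side.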
00:32:00.870 [2024-12-14 03:13:15.078790] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.870 [2024-12-14 03:13:15.156219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:00.870 [2024-12-14 03:13:15.178729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.870 [2024-12-14 03:13:15.178765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.870 [2024-12-14 03:13:15.178771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.870 [2024-12-14 03:13:15.178778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.870 [2024-12-14 03:13:15.178783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.870 [2024-12-14 03:13:15.180178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.870 [2024-12-14 03:13:15.180291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.870 [2024-12-14 03:13:15.180374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.870 [2024-12-14 03:13:15.180375] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:00.870 [2024-12-14 03:13:15.437255] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:00.870 Malloc1 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:00.870 03:13:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:01.129 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:01.129 [2024-12-14 03:13:16.260352] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:01.388 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:01.655 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:01.655 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:01.655 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:01.655 03:13:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:01.911 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:01.911 fio-3.35 00:32:01.911 Starting 1 thread 00:32:04.430 00:32:04.430 test: (groupid=0, jobs=1): 
err= 0: pid=362112: Sat Dec 14 03:13:19 2024 00:32:04.430 read: IOPS=12.0k, BW=47.0MiB/s (49.2MB/s)(94.2MiB/2005msec) 00:32:04.430 slat (nsec): min=1516, max=236853, avg=1633.71, stdev=2151.59 00:32:04.430 clat (usec): min=3056, max=10361, avg=5891.58, stdev=445.34 00:32:04.430 lat (usec): min=3092, max=10363, avg=5893.21, stdev=445.27 00:32:04.430 clat percentiles (usec): 00:32:04.430 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:32:04.430 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 00:32:04.430 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6390], 95.00th=[ 6587], 00:32:04.430 | 99.00th=[ 6849], 99.50th=[ 6915], 99.90th=[ 8455], 99.95th=[ 9503], 00:32:04.430 | 99.99th=[10028] 00:32:04.430 bw ( KiB/s): min=47328, max=48608, per=99.94%, avg=48066.00, stdev=621.48, samples=4 00:32:04.430 iops : min=11832, max=12152, avg=12016.50, stdev=155.37, samples=4 00:32:04.430 write: IOPS=12.0k, BW=46.8MiB/s (49.0MB/s)(93.8MiB/2005msec); 0 zone resets 00:32:04.430 slat (nsec): min=1551, max=227692, avg=1705.08, stdev=1620.70 00:32:04.430 clat (usec): min=2407, max=9407, avg=4743.67, stdev=366.37 00:32:04.430 lat (usec): min=2422, max=9408, avg=4745.38, stdev=366.43 00:32:04.430 clat percentiles (usec): 00:32:04.430 | 1.00th=[ 3916], 5.00th=[ 4178], 10.00th=[ 4293], 20.00th=[ 4490], 00:32:04.430 | 30.00th=[ 4555], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4817], 00:32:04.430 | 70.00th=[ 4948], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5276], 00:32:04.430 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 7242], 99.95th=[ 8356], 00:32:04.430 | 99.99th=[ 9372] 00:32:04.430 bw ( KiB/s): min=47464, max=48384, per=100.00%, avg=47898.00, stdev=391.01, samples=4 00:32:04.430 iops : min=11866, max=12096, avg=11974.50, stdev=97.75, samples=4 00:32:04.430 lat (msec) : 4=0.92%, 10=99.07%, 20=0.01% 00:32:04.430 cpu : usr=71.21%, sys=27.99%, ctx=79, majf=0, minf=3 00:32:04.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:04.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:04.430 issued rwts: total=24107,24002,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.430 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:04.430 00:32:04.430 Run status group 0 (all jobs): 00:32:04.430 READ: bw=47.0MiB/s (49.2MB/s), 47.0MiB/s-47.0MiB/s (49.2MB/s-49.2MB/s), io=94.2MiB (98.7MB), run=2005-2005msec 00:32:04.430 WRITE: bw=46.8MiB/s (49.0MB/s), 46.8MiB/s-46.8MiB/s (49.0MB/s-49.0MB/s), io=93.8MiB (98.3MB), run=2005-2005msec 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:04.430 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:04.431 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:04.431 03:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:04.431 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:04.431 fio-3.35 00:32:04.431 Starting 1 thread 00:32:06.324 [2024-12-14 03:13:21.307620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e2300 is same with the state(6) to be set 00:32:06.324 [2024-12-14 03:13:21.307677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e2300 is same with the state(6) to be set 00:32:06.888 00:32:06.888 test: (groupid=0, jobs=1): err= 0: pid=362259: Sat Dec 14 03:13:21 2024 00:32:06.888 read: IOPS=11.1k, BW=173MiB/s (181MB/s)(347MiB/2003msec) 00:32:06.888 slat (nsec): min=2470, max=81723, avg=2722.90, stdev=1241.04 00:32:06.888 clat (usec): min=1911, max=15528, avg=6580.39, stdev=1510.77 00:32:06.888 lat (usec): min=1913, max=15531, avg=6583.11, stdev=1510.90 00:32:06.888 clat percentiles (usec): 00:32:06.888 | 1.00th=[ 3490], 5.00th=[ 4228], 10.00th=[ 4752], 20.00th=[ 5276], 00:32:06.888 | 30.00th=[ 5735], 40.00th=[ 6128], 50.00th=[ 6521], 60.00th=[ 7046], 00:32:06.888 | 70.00th=[ 7373], 80.00th=[ 7635], 90.00th=[ 8455], 95.00th=[ 9110], 00:32:06.888 | 99.00th=[10814], 99.50th=[11338], 
99.90th=[12649], 99.95th=[12911], 00:32:06.888 | 99.99th=[13435] 00:32:06.888 bw ( KiB/s): min=82592, max=96288, per=51.07%, avg=90496.00, stdev=6335.14, samples=4 00:32:06.888 iops : min= 5162, max= 6018, avg=5656.00, stdev=395.95, samples=4 00:32:06.888 write: IOPS=6665, BW=104MiB/s (109MB/s)(185MiB/1780msec); 0 zone resets 00:32:06.888 slat (usec): min=28, max=389, avg=30.79, stdev= 7.12 00:32:06.888 clat (usec): min=2521, max=15219, avg=8533.09, stdev=1487.12 00:32:06.888 lat (usec): min=2551, max=15331, avg=8563.88, stdev=1488.60 00:32:06.888 clat percentiles (usec): 00:32:06.888 | 1.00th=[ 5669], 5.00th=[ 6456], 10.00th=[ 6849], 20.00th=[ 7308], 00:32:06.888 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8356], 60.00th=[ 8717], 00:32:06.888 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11207], 00:32:06.888 | 99.00th=[12780], 99.50th=[13566], 99.90th=[14615], 99.95th=[15008], 00:32:06.888 | 99.99th=[15139] 00:32:06.888 bw ( KiB/s): min=86592, max=100032, per=88.56%, avg=94448.00, stdev=6173.95, samples=4 00:32:06.888 iops : min= 5412, max= 6252, avg=5903.00, stdev=385.87, samples=4 00:32:06.888 lat (msec) : 2=0.01%, 4=2.20%, 10=90.51%, 20=7.28% 00:32:06.888 cpu : usr=87.06%, sys=12.19%, ctx=58, majf=0, minf=3 00:32:06.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:32:06.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:06.888 issued rwts: total=22182,11865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.888 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:06.888 00:32:06.888 Run status group 0 (all jobs): 00:32:06.888 READ: bw=173MiB/s (181MB/s), 173MiB/s-173MiB/s (181MB/s-181MB/s), io=347MiB (363MB), run=2003-2003msec 00:32:06.888 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=185MiB (194MB), run=1780-1780msec 00:32:06.888 03:13:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:06.888 03:13:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:06.888 03:13:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:06.888 03:13:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:06.888 03:13:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:06.888 03:13:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:32:06.888 03:13:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:06.888 03:13:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:06.888 03:13:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:07.145 03:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:07.145 03:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:07.145 03:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:32:10.416 
Nvme0n1 00:32:10.416 03:13:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:12.936 03:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=063bbf57-33f6-4706-a456-6b7f275d504d 00:32:12.936 03:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 063bbf57-33f6-4706-a456-6b7f275d504d 00:32:12.936 03:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=063bbf57-33f6-4706-a456-6b7f275d504d 00:32:12.936 03:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:12.936 03:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:12.936 03:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:12.936 03:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:13.193 03:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:13.193 { 00:32:13.193 "uuid": "063bbf57-33f6-4706-a456-6b7f275d504d", 00:32:13.193 "name": "lvs_0", 00:32:13.193 "base_bdev": "Nvme0n1", 00:32:13.193 "total_data_clusters": 930, 00:32:13.193 "free_clusters": 930, 00:32:13.193 "block_size": 512, 00:32:13.193 "cluster_size": 1073741824 00:32:13.193 } 00:32:13.193 ]' 00:32:13.193 03:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="063bbf57-33f6-4706-a456-6b7f275d504d") .free_clusters' 00:32:13.193 03:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:32:13.193 03:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="063bbf57-33f6-4706-a456-6b7f275d504d") .cluster_size' 00:32:13.193 03:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:13.193 03:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:32:13.193 03:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:32:13.193 952320 00:32:13.193 03:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:13.449 ecdfc98b-a30b-4f5c-abc5-8f95d2a3a6d0 00:32:13.705 03:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:13.706 03:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:13.962 03:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:14.218 03:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:14.475 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:14.475 fio-3.35 00:32:14.475 Starting 1 thread 00:32:16.993 [2024-12-14 03:13:31.798964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254f030 is same with the state(6) to be set 00:32:16.993 00:32:16.993 test: (groupid=0, jobs=1): err= 0: pid=362533: Sat Dec 14 03:13:31 2024 00:32:16.993 read: IOPS=8158, BW=31.9MiB/s (33.4MB/s)(63.9MiB/2006msec) 00:32:16.993 slat (nsec): min=1480, max=91211, avg=1629.74, stdev=1092.99 00:32:16.993 clat (usec): min=885, max=169902, avg=8643.24, stdev=10212.06 
00:32:16.993 lat (usec): min=887, max=169923, avg=8644.87, stdev=10212.21 00:32:16.993 clat percentiles (msec): 00:32:16.993 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:32:16.993 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:32:16.993 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:32:16.993 | 99.00th=[ 10], 99.50th=[ 12], 99.90th=[ 169], 99.95th=[ 169], 00:32:16.993 | 99.99th=[ 171] 00:32:16.993 bw ( KiB/s): min=23032, max=35928, per=99.85%, avg=32584.00, stdev=6369.76, samples=4 00:32:16.994 iops : min= 5758, max= 8982, avg=8146.00, stdev=1592.44, samples=4 00:32:16.994 write: IOPS=8153, BW=31.8MiB/s (33.4MB/s)(63.9MiB/2006msec); 0 zone resets 00:32:16.994 slat (nsec): min=1512, max=81118, avg=1702.24, stdev=817.13 00:32:16.994 clat (usec): min=176, max=168523, avg=6946.49, stdev=9543.52 00:32:16.994 lat (usec): min=178, max=168527, avg=6948.19, stdev=9543.68 00:32:16.994 clat percentiles (msec): 00:32:16.994 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:32:16.994 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:32:16.994 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:32:16.994 | 99.00th=[ 8], 99.50th=[ 10], 99.90th=[ 169], 99.95th=[ 169], 00:32:16.994 | 99.99th=[ 169] 00:32:16.994 bw ( KiB/s): min=23976, max=35536, per=99.97%, avg=32606.00, stdev=5753.69, samples=4 00:32:16.994 iops : min= 5994, max= 8884, avg=8151.50, stdev=1438.42, samples=4 00:32:16.994 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:32:16.994 lat (msec) : 2=0.05%, 4=0.23%, 10=99.16%, 20=0.14%, 250=0.39% 00:32:16.994 cpu : usr=70.67%, sys=27.83%, ctx=301, majf=0, minf=3 00:32:16.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:16.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:16.994 issued rwts: total=16366,16356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.994 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:16.994 00:32:16.994 Run status group 0 (all jobs): 00:32:16.994 READ: bw=31.9MiB/s (33.4MB/s), 31.9MiB/s-31.9MiB/s (33.4MB/s-33.4MB/s), io=63.9MiB (67.0MB), run=2006-2006msec 00:32:16.994 WRITE: bw=31.8MiB/s (33.4MB/s), 31.8MiB/s-31.8MiB/s (33.4MB/s-33.4MB/s), io=63.9MiB (67.0MB), run=2006-2006msec 00:32:16.994 03:13:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:16.994 03:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:18.361 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=c7a77f97-9613-4812-8603-adf568454f96 00:32:18.361 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb c7a77f97-9613-4812-8603-adf568454f96 00:32:18.361 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=c7a77f97-9613-4812-8603-adf568454f96 00:32:18.361 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:18.361 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:18.361 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:18.361 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:18.361 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:18.361 { 00:32:18.361 "uuid": "063bbf57-33f6-4706-a456-6b7f275d504d", 00:32:18.361 "name": "lvs_0", 00:32:18.361 "base_bdev": "Nvme0n1", 00:32:18.361 "total_data_clusters": 930, 00:32:18.361 "free_clusters": 0, 00:32:18.361 "block_size": 512, 00:32:18.361 "cluster_size": 1073741824 00:32:18.361 }, 00:32:18.361 { 00:32:18.361 "uuid": "c7a77f97-9613-4812-8603-adf568454f96", 00:32:18.361 "name": "lvs_n_0", 00:32:18.361 "base_bdev": "ecdfc98b-a30b-4f5c-abc5-8f95d2a3a6d0", 00:32:18.361 "total_data_clusters": 237847, 00:32:18.361 "free_clusters": 237847, 00:32:18.361 "block_size": 512, 00:32:18.361 "cluster_size": 4194304 00:32:18.361 } 00:32:18.361 ]' 00:32:18.361 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="c7a77f97-9613-4812-8603-adf568454f96") .free_clusters' 00:32:18.361 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:32:18.362 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="c7a77f97-9613-4812-8603-adf568454f96") .cluster_size' 00:32:18.362 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:18.362 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:32:18.362 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:32:18.362 951388 00:32:18.362 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:18.924 9528aec6-821e-4215-be13-cf195fefd6ec 00:32:18.924 03:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:19.181 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:19.437 03:13:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:19.437 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:19.694 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:19.694 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:19.694 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:19.694 03:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:19.951 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:19.951 fio-3.35 00:32:19.951 Starting 1 thread 00:32:22.475 00:32:22.475 test: (groupid=0, jobs=1): err= 0: pid=362720: Sat Dec 14 03:13:37 2024 00:32:22.475 read: IOPS=7868, BW=30.7MiB/s (32.2MB/s)(61.7MiB/2007msec) 00:32:22.475 slat (nsec): min=1525, max=102194, avg=1610.96, stdev=1083.91 00:32:22.475 clat (usec): min=3095, max=14120, avg=8896.37, stdev=797.63 00:32:22.475 lat (usec): min=3098, max=14121, avg=8897.98, stdev=797.57 00:32:22.475 clat percentiles (usec): 00:32:22.475 | 1.00th=[ 7046], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8225], 00:32:22.475 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:32:22.475 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10159], 00:32:22.475 | 99.00th=[10683], 99.50th=[10814], 99.90th=[12649], 99.95th=[12911], 00:32:22.475 | 99.99th=[13960] 00:32:22.475 bw ( KiB/s): min=30112, max=32048, per=99.92%, avg=31450.00, stdev=900.17, samples=4 00:32:22.475 iops : min= 7528, max= 8012, avg=7862.50, stdev=225.04, samples=4 00:32:22.475 write: IOPS=7843, BW=30.6MiB/s (32.1MB/s)(61.5MiB/2007msec); 0 zone resets 00:32:22.475 slat (nsec): min=1556, max=78419, 
avg=1676.71, stdev=696.45 00:32:22.475 clat (usec): min=1457, max=12997, avg=7278.14, stdev=650.15 00:32:22.475 lat (usec): min=1461, max=12999, avg=7279.82, stdev=650.13 00:32:22.475 clat percentiles (usec): 00:32:22.475 | 1.00th=[ 5735], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6783], 00:32:22.475 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7439], 00:32:22.475 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8291], 00:32:22.475 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[11731], 99.95th=[11863], 00:32:22.475 | 99.99th=[13042] 00:32:22.475 bw ( KiB/s): min=31192, max=31480, per=99.97%, avg=31366.00, stdev=124.69, samples=4 00:32:22.475 iops : min= 7798, max= 7870, avg=7841.50, stdev=31.17, samples=4 00:32:22.475 lat (msec) : 2=0.01%, 4=0.11%, 10=95.98%, 20=3.90% 00:32:22.475 cpu : usr=71.68%, sys=27.52%, ctx=130, majf=0, minf=3 00:32:22.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:22.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:22.475 issued rwts: total=15793,15742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:22.475 00:32:22.475 Run status group 0 (all jobs): 00:32:22.475 READ: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=61.7MiB (64.7MB), run=2007-2007msec 00:32:22.476 WRITE: bw=30.6MiB/s (32.1MB/s), 30.6MiB/s-30.6MiB/s (32.1MB/s-32.1MB/s), io=61.5MiB (64.5MB), run=2007-2007msec 00:32:22.476 03:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:22.476 03:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:22.476 03:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:26.652 03:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:26.652 03:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:29.172 03:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:29.429 03:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:31.324 
03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:31.324 rmmod nvme_tcp 00:32:31.324 rmmod nvme_fabrics 00:32:31.324 rmmod nvme_keyring 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 361946 ']' 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 361946 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 361946 ']' 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 361946 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 361946 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 361946' 00:32:31.324 killing process with pid 361946 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 361946 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 361946 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.324 03:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:33.859 00:32:33.859 real 0m39.530s 00:32:33.859 user 2m37.749s 00:32:33.859 sys 0m8.745s 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.859 
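At this point nvmftestfini has torn the environment back down in roughly the reverse order of the setup. The following sketch is reconstructed from the trace above; the pipe composition and the namespace removal are assumptions, since the trace only shows the individual commands.

# Teardown sketch, reconstructed from the nvmftestfini trace above.
modprobe -v -r nvme-tcp                             # unloads nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill 361946 && wait 361946                          # the nvmf_tgt started for this test
iptables-save | grep -v SPDK_NVMF | iptables-restore  # assumed pipeline; drops only the tagged rule
ip netns delete cvl_0_0_ns_spdk                     # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1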
************************************ 00:32:33.859 END TEST nvmf_fio_host 00:32:33.859 ************************************ 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.859 ************************************ 00:32:33.859 START TEST nvmf_failover 00:32:33.859 ************************************ 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:33.859 * Looking for test storage... 00:32:33.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:33.859 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:33.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.860 --rc genhtml_branch_coverage=1 00:32:33.860 --rc genhtml_function_coverage=1 00:32:33.860 --rc genhtml_legend=1 00:32:33.860 --rc geninfo_all_blocks=1 00:32:33.860 --rc geninfo_unexecuted_blocks=1 00:32:33.860 00:32:33.860 ' 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:33.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.860 --rc genhtml_branch_coverage=1 00:32:33.860 --rc genhtml_function_coverage=1 00:32:33.860 --rc genhtml_legend=1 00:32:33.860 --rc geninfo_all_blocks=1 00:32:33.860 --rc geninfo_unexecuted_blocks=1 00:32:33.860 00:32:33.860 ' 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:33.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.860 --rc genhtml_branch_coverage=1 00:32:33.860 --rc genhtml_function_coverage=1 00:32:33.860 --rc genhtml_legend=1 00:32:33.860 --rc geninfo_all_blocks=1 00:32:33.860 --rc geninfo_unexecuted_blocks=1 00:32:33.860 00:32:33.860 ' 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:33.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.860 --rc genhtml_branch_coverage=1 00:32:33.860 --rc genhtml_function_coverage=1 00:32:33.860 --rc genhtml_legend=1 00:32:33.860 --rc geninfo_all_blocks=1 00:32:33.860 --rc geninfo_unexecuted_blocks=1 00:32:33.860 00:32:33.860 ' 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:33.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
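For readers skimming the xtrace, the setup traced so far reduces to a handful of assignments (a condensed sketch; the values are exactly the ones echoed above, and the workspace path is specific to this runner):

    NVMF_PORT=4420                 # primary NVMe/TCP listener port
    NVMF_SECOND_PORT=4421          # secondary port, used as the alternate path
    NVMF_THIRD_PORT=4422           # third port, added and removed mid-test
    MALLOC_BDEV_SIZE=64            # size later passed to bdev_malloc_create
    MALLOC_BLOCK_SIZE=512          # block size later passed to bdev_malloc_create
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py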
00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:33.860 03:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:40.429 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:40.429 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:40.429 Found net devices under 0000:af:00.0: cvl_0_0 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:40.429 Found net devices under 0000:af:00.1: cvl_0_1 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:40.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:40.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:32:40.429 00:32:40.429 --- 10.0.0.2 ping statistics --- 00:32:40.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.429 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:40.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:40.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:32:40.429 00:32:40.429 --- 10.0.0.1 ping statistics --- 00:32:40.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.429 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=365087 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 365087 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 365087 ']' 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:40.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:40.429 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:40.429 [2024-12-14 03:13:54.629950] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:40.430 [2024-12-14 03:13:54.629991] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:40.430 [2024-12-14 03:13:54.708956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:40.430 [2024-12-14 03:13:54.730653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:40.430 [2024-12-14 03:13:54.730689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:40.430 [2024-12-14 03:13:54.730696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:40.430 [2024-12-14 03:13:54.730701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:40.430 [2024-12-14 03:13:54.730706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:40.430 [2024-12-14 03:13:54.732009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:40.430 [2024-12-14 03:13:54.732114] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.430 [2024-12-14 03:13:54.732115] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:40.430 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:40.430 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:40.430 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:40.430 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:40.430 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:40.430 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:40.430 03:13:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:40.430 [2024-12-14 03:13:55.022955] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.430 03:13:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:40.430 Malloc0 00:32:40.430 03:13:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:40.430 03:13:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:40.687 03:13:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:40.687 [2024-12-14 03:13:55.811012] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:40.945 03:13:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:40.945 [2024-12-14 03:13:56.007566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:40.945 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:41.202 [2024-12-14 03:13:56.196189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:32:41.202 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:41.202 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=365137 00:32:41.202 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:41.202 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 365137 /var/tmp/bdevperf.sock 00:32:41.202 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 365137 ']' 00:32:41.202 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:41.202 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:41.202 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:41.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:41.202 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:41.202 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:41.459 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:41.459 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:41.459 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:41.716 NVMe0n1 00:32:41.973 03:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:42.230 00:32:42.230 03:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:42.230 03:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=365152 00:32:42.230 03:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:43.161 03:13:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.419 [2024-12-14 03:13:58.339199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 
03:13:58.339261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same 
with the state(6) to be set 00:32:43.419 [2024-12-14 03:13:58.339391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240fac0 is same with the state(6) to be set 00:32:43.419 03:13:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:46.694 03:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:46.694 00:32:46.694 03:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:46.951 [2024-12-14 03:14:01.887367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24108e0 is same with the state(6) to be set 00:32:46.951 [2024-12-14 03:14:01.887412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24108e0 is same with the state(6) to be set 00:32:46.951 [2024-12-14 03:14:01.887420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24108e0 is same with the state(6) to be set 00:32:46.951 [2024-12-14 03:14:01.887426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24108e0 is same with the state(6) to be set 00:32:46.951 [2024-12-14 03:14:01.887433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24108e0 is same with the state(6) to be set 00:32:46.951 [2024-12-14 03:14:01.887439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24108e0 is same with the state(6) to be set 00:32:46.951 [2024-12-14 03:14:01.887445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24108e0 is same with the state(6) to be set 00:32:46.951 [2024-12-14 03:14:01.887451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24108e0 is same with the state(6) to be set 00:32:46.951 03:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:50.224 03:14:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:50.224 [2024-12-14 03:14:05.100146] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:50.224 03:14:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:51.153 03:14:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:51.410 [2024-12-14 03:14:06.315703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the 
state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 [2024-12-14 03:14:06.315879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411690 is same with the state(6) to be set 00:32:51.410 03:14:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 
365152 00:32:57.981 { 00:32:57.981 "results": [ 00:32:57.981 { 00:32:57.981 "job": "NVMe0n1", 00:32:57.981 "core_mask": "0x1", 00:32:57.981 "workload": "verify", 00:32:57.981 "status": "finished", 00:32:57.981 "verify_range": { 00:32:57.981 "start": 0, 00:32:57.981 "length": 16384 00:32:57.981 }, 00:32:57.981 "queue_depth": 128, 00:32:57.981 "io_size": 4096, 00:32:57.981 "runtime": 15.008228, 00:32:57.981 "iops": 11263.355007666461, 00:32:57.981 "mibps": 43.997480498697115, 00:32:57.981 "io_failed": 10205, 00:32:57.981 "io_timeout": 0, 00:32:57.981 "avg_latency_us": 10694.637960750311, 00:32:57.981 "min_latency_us": 405.69904761904763, 00:32:57.981 "max_latency_us": 21845.333333333332 00:32:57.981 } 00:32:57.981 ], 00:32:57.981 "core_count": 1 00:32:57.981 } 00:32:57.981 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 365137 00:32:57.981 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 365137 ']' 00:32:57.981 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 365137 00:32:57.981 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:57.981 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:57.981 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 365137 00:32:57.981 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:57.981 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:57.981 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 365137' 00:32:57.981 killing process with pid 365137 00:32:57.981 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 365137 00:32:57.981 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 365137 00:32:57.981 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:57.981 [2024-12-14 03:13:56.254994] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:57.981 [2024-12-14 03:13:56.255046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365137 ] 00:32:57.981 [2024-12-14 03:13:56.329800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.981 [2024-12-14 03:13:56.352349] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.981 Running I/O for 15 seconds... 
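The JSON block above is bdevperf's summary for the 15-second verify run: about 11,263 IOPS (44 MiB/s at 4 KiB I/O), 10,205 failed I/Os, and latencies from roughly 0.4 ms to 21.8 ms with an average near 10.7 ms; the failed I/Os presumably reflect commands caught on paths while listeners were being torn down under load. Stripped of the xtrace noise, the sequence that produced this result is roughly the following (paths abbreviated to rpc.py and bdevperf; every command and argument appears verbatim in the trace above, and nvmf_tgt itself was started earlier as "ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE"):

    # Target side:
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Initiator side (bdevperf with its own RPC socket):
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &          # start the 15 s workload
    # While I/O runs, paths are removed and re-added on the target:
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422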
00:32:57.981 11446.00 IOPS, 44.71 MiB/s [2024-12-14T02:14:13.114Z] [2024-12-14 03:13:58.341194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.981 [2024-12-14 03:13:58.341229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.981 [2024-12-14 03:13:58.341244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.981 [2024-12-14 03:13:58.341251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.981 [2024-12-14 03:13:58.341261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.981 [2024-12-14 03:13:58.341268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.981 [2024-12-14 03:13:58.341276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.981 [2024-12-14 03:13:58.341283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
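The wall of "READ/WRITE ... ABORTED - SQ DELETION (00/08)" completions in the replayed bdevperf log is the interesting part: when a listener is pulled, the target deletes the submission queues for that path and every in-flight command on it completes with ABORTED - SQ DELETION. Those aborts presumably account for the io_failed count in the summary above, while the controller fails over to the remaining 4421/4422 listeners and the run still completes. If you want to quantify the aborts from the saved log, something like the following would work (a hypothetical post-mortem sketch; try.txt is the bdevperf log that failover.sh cats here, and the script deletes it during cleanup, so run this before it exits):

    grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt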
00:32:57.982 [2024-12-14 03:13:58.341538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.982 [2024-12-14 03:13:58.341654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 
03:13:58.341692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.982 [2024-12-14 03:13:58.341894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.982 [2024-12-14 03:13:58.341902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.341908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.341916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.341922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.341930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.341936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.341944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.341951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.341959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.341967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.341976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.341982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.341990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:90 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.341998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101200 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:57.983 [2024-12-14 03:13:58.342288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342440] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.983 [2024-12-14 03:13:58.342491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.983 [2024-12-14 03:13:58.342498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.984 [2024-12-14 03:13:58.342512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.984 [2024-12-14 03:13:58.342529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.984 [2024-12-14 03:13:58.342543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.984 [2024-12-14 03:13:58.342557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.984 [2024-12-14 03:13:58.342571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.984 [2024-12-14 03:13:58.342586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.984 [2024-12-14 03:13:58.342606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.984 [2024-12-14 03:13:58.342621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101464 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101472 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101480 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101488 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101496 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101504 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101512 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101520 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101528 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101536 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101544 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101552 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101560 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101568 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.342977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101576 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.342983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.342989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.342994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.343001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101584 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.343007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.343014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.343018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.343023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101592 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.343029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 
03:13:58.343036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.343040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.343046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101600 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.343052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.343059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.984 [2024-12-14 03:13:58.343064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.984 [2024-12-14 03:13:58.343069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101608 len:8 PRP1 0x0 PRP2 0x0 00:32:57.984 [2024-12-14 03:13:58.343075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.984 [2024-12-14 03:13:58.343081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.343087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.343092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101616 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.343098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.343105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.343110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.343115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101624 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.343121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.343128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.343132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.343138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101632 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.343143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.343150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.343155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.343160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101640 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.343166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.343172] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.343177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.343183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101648 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.343189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.343196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.343200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.343205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101656 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.343213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.343220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.343225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.343230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101664 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.343236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.343242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.343247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.343252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101672 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.343258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.343266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.343271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.343276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101680 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.343282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.343289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.343293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.343299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100888 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.343304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.343311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:32:57.985 [2024-12-14 03:13:58.343319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.343324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100896 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.343331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.343337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.343342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.343347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100904 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.354493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.354507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.354513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.354520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100912 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.354526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.354534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.354539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.354544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100920 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.354551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.354558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.354563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.354568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100928 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.354575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.354582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.985 [2024-12-14 03:13:58.354587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.985 [2024-12-14 03:13:58.354592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100936 len:8 PRP1 0x0 PRP2 0x0 00:32:57.985 [2024-12-14 03:13:58.354601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.354643] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 
10.0.0.2:4421 00:32:57.985 [2024-12-14 03:13:58.354668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.985 [2024-12-14 03:13:58.354675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.354683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.985 [2024-12-14 03:13:58.354690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.354698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.985 [2024-12-14 03:13:58.354705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.354712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.985 [2024-12-14 03:13:58.354718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:13:58.354725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:57.985 [2024-12-14 03:13:58.354754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1423460 (9): Bad file descriptor 00:32:57.985 [2024-12-14 03:13:58.357693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:57.985 [2024-12-14 03:13:58.384491] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
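The abort storm above is the expected side effect of the test tearing down the active I/O submission queue: every queued READ/WRITE on qid:1 is completed with ABORTED - SQ DELETION, after which bdev_nvme starts a failover from 10.0.0.2:4420 to 10.0.0.2:4421 and resets nqn.2016-06.io.spdk:cnode1 ("Resetting controller successful"). For orientation only, a minimal sketch of how a two-listener, failover-capable target/host pair of this shape is typically wired up with SPDK's rpc.py follows; the bdev/namespace names (Nvme0, Malloc0), sizes, and the exact flag selection are illustrative assumptions, not taken from this run's scripts.

  # Sketch only (assumed setup, not this job's actual configuration):
  # expose one namespace over two TCP listeners so the host-side bdev_nvme
  # controller has a second path to fail over to when 4420 goes away.
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Host side: attach both paths under one controller name so a dead path
  # triggers the failover/reset sequence seen in the log above.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

With a setup of that shape, killing or removing the 4420 listener while I/O is in flight produces exactly this pattern: queued requests aborted with SQ DELETION, a "Start failover" notice, and a successful controller reset on the surviving path, after which the periodic IOPS lines below resume.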
00:32:57.985 11210.00 IOPS, 43.79 MiB/s [2024-12-14T02:14:13.118Z] 11302.33 IOPS, 44.15 MiB/s [2024-12-14T02:14:13.118Z] 11361.75 IOPS, 44.38 MiB/s [2024-12-14T02:14:13.118Z] [2024-12-14 03:14:01.889758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.985 [2024-12-14 03:14:01.889793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:14:01.889807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.985 [2024-12-14 03:14:01.889815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:14:01.889824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.985 [2024-12-14 03:14:01.889830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:14:01.889839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.985 [2024-12-14 03:14:01.889845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.985 [2024-12-14 03:14:01.889854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.985 [2024-12-14 03:14:01.889861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.889869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.986 [2024-12-14 03:14:01.889880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.889888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.986 [2024-12-14 03:14:01.889895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.889903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.986 [2024-12-14 03:14:01.889910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.889918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.986 [2024-12-14 03:14:01.889925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.889933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.986 [2024-12-14 03:14:01.889939] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.889947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.889954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.889962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.889969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.889978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.889985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.889993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890088] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 
03:14:01.890388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.986 [2024-12-14 03:14:01.890402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.986 [2024-12-14 03:14:01.890408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.987 [2024-12-14 03:14:01.890637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39664 len:8 PRP1 0x0 PRP2 0x0 00:32:57.987 [2024-12-14 03:14:01.890671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.987 [2024-12-14 03:14:01.890685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890690] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39672 len:8 PRP1 0x0 PRP2 0x0 00:32:57.987 [2024-12-14 03:14:01.890697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.987 [2024-12-14 03:14:01.890708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39680 len:8 PRP1 0x0 PRP2 0x0 00:32:57.987 [2024-12-14 03:14:01.890720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.987 [2024-12-14 03:14:01.890730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39688 len:8 PRP1 0x0 PRP2 0x0 00:32:57.987 [2024-12-14 03:14:01.890743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.987 [2024-12-14 03:14:01.890754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39696 len:8 PRP1 0x0 PRP2 0x0 00:32:57.987 [2024-12-14 03:14:01.890765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.987 [2024-12-14 03:14:01.890776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39704 len:8 PRP1 0x0 PRP2 0x0 00:32:57.987 [2024-12-14 03:14:01.890787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.987 [2024-12-14 03:14:01.890799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39712 len:8 PRP1 0x0 PRP2 0x0 00:32:57.987 [2024-12-14 03:14:01.890812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.987 [2024-12-14 03:14:01.890824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39720 len:8 PRP1 
0x0 PRP2 0x0 00:32:57.987 [2024-12-14 03:14:01.890835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.987 [2024-12-14 03:14:01.890846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39728 len:8 PRP1 0x0 PRP2 0x0 00:32:57.987 [2024-12-14 03:14:01.890857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.987 [2024-12-14 03:14:01.890868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39736 len:8 PRP1 0x0 PRP2 0x0 00:32:57.987 [2024-12-14 03:14:01.890880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.987 [2024-12-14 03:14:01.890891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39744 len:8 PRP1 0x0 PRP2 0x0 00:32:57.987 [2024-12-14 03:14:01.890902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.987 [2024-12-14 03:14:01.890913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39752 len:8 PRP1 0x0 PRP2 0x0 00:32:57.987 [2024-12-14 03:14:01.890925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.987 [2024-12-14 03:14:01.890936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39760 len:8 PRP1 0x0 PRP2 0x0 00:32:57.987 [2024-12-14 03:14:01.890948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.987 [2024-12-14 03:14:01.890959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39768 len:8 PRP1 0x0 PRP2 0x0 00:32:57.987 [2024-12-14 03:14:01.890971] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.987 [2024-12-14 03:14:01.890977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.987 [2024-12-14 03:14:01.890982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.987 [2024-12-14 03:14:01.890988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39776 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.890994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39784 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39792 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39800 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39808 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39816 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39824 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39832 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39840 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39848 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39856 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39864 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39872 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39880 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39888 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39896 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39904 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39912 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 
03:14:01.891401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39920 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39928 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39936 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39944 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39952 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39960 len:8 PRP1 0x0 PRP2 0x0 00:32:57.988 [2024-12-14 03:14:01.891532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.988 [2024-12-14 03:14:01.891538] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.988 [2024-12-14 03:14:01.891543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.988 [2024-12-14 03:14:01.891548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39968 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.891554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.891561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.891566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.891571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39976 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.891577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.891584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.891589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.891594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39984 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.891600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.891606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.891611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.891616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39992 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.891622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.891628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.891633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.891638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40000 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.891644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.891650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.891656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.891662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40008 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.891668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.891674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:32:57.989 [2024-12-14 03:14:01.891679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.891684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40016 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.891690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.891697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.891703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.891708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40024 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.891714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.891720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.891725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.891730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40032 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.891736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.891743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.891747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.891752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40040 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.891758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.891766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.891770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.891775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40048 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.891782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.902719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.902728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.902734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40056 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.902742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.902748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.902753] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.902758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40064 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.902764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.902771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.902776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.902781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40072 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.902787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.902794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.902798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.902803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40080 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.902809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.902817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.902822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.902827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40088 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.902833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.902840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.902844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.902849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40096 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.902855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.902861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.902866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.902871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40104 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.902877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.902884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.902889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.902894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40112 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.902900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.902906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.902911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.902916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40120 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.902921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.902928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.902932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.902938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40128 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.902943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.902950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.902954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.902959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40136 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.902965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.902972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.902977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.902982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40144 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.902989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.902996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.903000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 [2024-12-14 03:14:01.903005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40152 len:8 PRP1 0x0 PRP2 0x0 00:32:57.989 [2024-12-14 03:14:01.903011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.989 [2024-12-14 03:14:01.903018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.989 [2024-12-14 03:14:01.903022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.989 
[2024-12-14 03:14:01.903028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40160 len:8 PRP1 0x0 PRP2 0x0 00:32:57.990 [2024-12-14 03:14:01.903034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:01.903040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.990 [2024-12-14 03:14:01.903045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.990 [2024-12-14 03:14:01.903050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40168 len:8 PRP1 0x0 PRP2 0x0 00:32:57.990 [2024-12-14 03:14:01.903056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:01.903062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.990 [2024-12-14 03:14:01.903066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.990 [2024-12-14 03:14:01.903072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40176 len:8 PRP1 0x0 PRP2 0x0 00:32:57.990 [2024-12-14 03:14:01.903078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:01.903084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.990 [2024-12-14 03:14:01.903089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.990 [2024-12-14 03:14:01.903094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40184 len:8 PRP1 0x0 PRP2 0x0 00:32:57.990 [2024-12-14 03:14:01.903100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:01.903106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.990 [2024-12-14 03:14:01.903111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.990 [2024-12-14 03:14:01.903116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40192 len:8 PRP1 0x0 PRP2 0x0 00:32:57.990 [2024-12-14 03:14:01.903122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:01.903128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.990 [2024-12-14 03:14:01.903133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.990 [2024-12-14 03:14:01.903139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40200 len:8 PRP1 0x0 PRP2 0x0 00:32:57.990 [2024-12-14 03:14:01.903144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:01.903151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.990 [2024-12-14 03:14:01.903155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.990 [2024-12-14 03:14:01.903162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40208 len:8 PRP1 0x0 PRP2 0x0 00:32:57.990 [2024-12-14 03:14:01.903168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:01.903174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.990 [2024-12-14 03:14:01.903179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.990 [2024-12-14 03:14:01.903184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40216 len:8 PRP1 0x0 PRP2 0x0 00:32:57.990 [2024-12-14 03:14:01.903191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:01.903232] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:57.990 [2024-12-14 03:14:01.903255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.990 [2024-12-14 03:14:01.903262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:01.903270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.990 [2024-12-14 03:14:01.903276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:01.903283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.990 [2024-12-14 03:14:01.903290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:01.903296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.990 [2024-12-14 03:14:01.903303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:01.903310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:32:57.990 [2024-12-14 03:14:01.903345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1423460 (9): Bad file descriptor 00:32:57.990 [2024-12-14 03:14:01.906660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:57.990 [2024-12-14 03:14:02.057939] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:32:57.990 10989.40 IOPS, 42.93 MiB/s [2024-12-14T02:14:13.123Z] 11074.83 IOPS, 43.26 MiB/s [2024-12-14T02:14:13.123Z] 11149.14 IOPS, 43.55 MiB/s [2024-12-14T02:14:13.123Z] 11185.62 IOPS, 43.69 MiB/s [2024-12-14T02:14:13.123Z] 11227.33 IOPS, 43.86 MiB/s [2024-12-14T02:14:13.123Z] [2024-12-14 03:14:06.317381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 
nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.990 [2024-12-14 03:14:06.317648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.990 [2024-12-14 03:14:06.317658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.991 [2024-12-14 03:14:06.317664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.991 [2024-12-14 03:14:06.317680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.991 [2024-12-14 03:14:06.317694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96384 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:57.991 [2024-12-14 03:14:06.317709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.991 [2024-12-14 03:14:06.317724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.991 [2024-12-14 03:14:06.317738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.991 [2024-12-14 03:14:06.317752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.991 [2024-12-14 03:14:06.317767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.991 [2024-12-14 03:14:06.317781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.991 [2024-12-14 03:14:06.317796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.991 [2024-12-14 03:14:06.317811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.317826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.317845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 
[2024-12-14 03:14:06.317860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.317874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.317889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.317903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.317918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.317933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.317947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.317961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.317975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.317989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.317997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.991 [2024-12-14 03:14:06.318249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.991 [2024-12-14 03:14:06.318257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:57.992 [2024-12-14 03:14:06.318299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.992 [2024-12-14 03:14:06.318371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.992 [2024-12-14 03:14:06.318386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.992 [2024-12-14 03:14:06.318402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318452] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318596] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96952 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.992 [2024-12-14 03:14:06.318845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.992 [2024-12-14 03:14:06.318853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.993 [2024-12-14 03:14:06.318859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.318882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.318888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97024 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.318894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.318904] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.318916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.318921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97032 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.318927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.318934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.318939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.318944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97040 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.318950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.318956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.318961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.318968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97048 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.318974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.318981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.318986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.318991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97056 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.318997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97064 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97072 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:32:57.993 [2024-12-14 03:14:06.319054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97080 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97088 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97096 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97104 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97112 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97120 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319190] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97128 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97136 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97144 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97152 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97160 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97168 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97176 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97184 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97192 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97200 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.993 [2024-12-14 03:14:06.319555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.993 [2024-12-14 03:14:06.319560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.993 [2024-12-14 03:14:06.319565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97208 len:8 PRP1 0x0 PRP2 0x0 00:32:57.993 [2024-12-14 03:14:06.319571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.994 [2024-12-14 03:14:06.319578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.994 [2024-12-14 03:14:06.319583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.994 [2024-12-14 03:14:06.319588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97216 len:8 PRP1 0x0 PRP2 0x0 00:32:57.994 [2024-12-14 03:14:06.319594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.994 [2024-12-14 03:14:06.319600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.994 [2024-12-14 03:14:06.319606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.994 
[2024-12-14 03:14:06.319611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97224 len:8 PRP1 0x0 PRP2 0x0 00:32:57.994 [2024-12-14 03:14:06.319617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.994 [2024-12-14 03:14:06.319624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.994 [2024-12-14 03:14:06.319628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.994 [2024-12-14 03:14:06.319634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97232 len:8 PRP1 0x0 PRP2 0x0 00:32:57.994 [2024-12-14 03:14:06.319641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.994 [2024-12-14 03:14:06.319648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.994 [2024-12-14 03:14:06.319652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.994 [2024-12-14 03:14:06.319657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97240 len:8 PRP1 0x0 PRP2 0x0 00:32:57.994 [2024-12-14 03:14:06.319663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.994 [2024-12-14 03:14:06.319670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.994 [2024-12-14 03:14:06.319674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.994 [2024-12-14 03:14:06.329652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97248 len:8 PRP1 0x0 PRP2 0x0 00:32:57.994 [2024-12-14 03:14:06.329666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.994 [2024-12-14 03:14:06.329716] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:57.994 [2024-12-14 03:14:06.329744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.994 [2024-12-14 03:14:06.329754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.994 [2024-12-14 03:14:06.329764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.994 [2024-12-14 03:14:06.329772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.994 [2024-12-14 03:14:06.329782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.994 [2024-12-14 03:14:06.329790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.994 [2024-12-14 03:14:06.329800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.994 [2024-12-14 03:14:06.329808] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.994 [2024-12-14 03:14:06.329817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:57.994 [2024-12-14 03:14:06.329854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1423460 (9): Bad file descriptor 00:32:57.994 [2024-12-14 03:14:06.333592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:57.994 [2024-12-14 03:14:06.364751] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:32:57.994 11195.90 IOPS, 43.73 MiB/s [2024-12-14T02:14:13.127Z] 11237.64 IOPS, 43.90 MiB/s [2024-12-14T02:14:13.127Z] 11236.58 IOPS, 43.89 MiB/s [2024-12-14T02:14:13.127Z] 11248.85 IOPS, 43.94 MiB/s [2024-12-14T02:14:13.127Z] 11258.71 IOPS, 43.98 MiB/s 00:32:57.994 Latency(us) 00:32:57.994 [2024-12-14T02:14:13.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.994 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:57.994 Verification LBA range: start 0x0 length 0x4000 00:32:57.994 NVMe0n1 : 15.01 11263.36 44.00 679.96 0.00 10694.64 405.70 21845.33 00:32:57.994 [2024-12-14T02:14:13.127Z] =================================================================================================================== 00:32:57.994 [2024-12-14T02:14:13.127Z] Total : 11263.36 44.00 679.96 0.00 10694.64 405.70 21845.33 00:32:57.994 Received shutdown signal, test time was about 15.000000 seconds 00:32:57.994 00:32:57.994 Latency(us) 00:32:57.994 [2024-12-14T02:14:13.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.994 [2024-12-14T02:14:13.127Z] =================================================================================================================== 00:32:57.994 [2024-12-14T02:14:13.127Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=365343 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 365343 /var/tmp/bdevperf.sock 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 365343 ']' 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:57.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
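With the first bdevperf run finished, failover.sh checks that the capture records exactly three successful controller resets and then starts a second bdevperf instance idle (-z) so the rest of the test can drive it over /var/tmp/bdevperf.sock. A minimal sketch of that step, using only the commands visible in the trace (try.txt as the capture file and the waitforlisten helper from autotest_common.sh are taken from this run; outside the SPDK test harness you would poll the socket yourself):

    # Expect exactly three 'Resetting controller successful' events from run one.
    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count == 3 )) || exit 1

    # Relaunch bdevperf idle so it can be configured over its RPC socket.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock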
00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:57.994 [2024-12-14 03:14:12.929000] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:57.994 03:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:58.253 [2024-12-14 03:14:13.117581] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:58.253 03:14:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:58.512 NVMe0n1 00:32:58.512 03:14:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:58.771 00:32:58.771 03:14:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:59.029 00:32:59.029 03:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:59.029 03:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:59.288 03:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:59.547 03:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:02.834 03:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:02.834 03:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:02.834 03:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=365420 00:33:02.834 03:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:02.834 03:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 365420 00:33:03.771 { 00:33:03.771 "results": [ 00:33:03.771 { 00:33:03.771 "job": "NVMe0n1", 00:33:03.771 "core_mask": "0x1", 00:33:03.771 
"workload": "verify", 00:33:03.771 "status": "finished", 00:33:03.771 "verify_range": { 00:33:03.771 "start": 0, 00:33:03.771 "length": 16384 00:33:03.771 }, 00:33:03.771 "queue_depth": 128, 00:33:03.771 "io_size": 4096, 00:33:03.771 "runtime": 1.008945, 00:33:03.771 "iops": 11334.611896585047, 00:33:03.771 "mibps": 44.27582772103534, 00:33:03.771 "io_failed": 0, 00:33:03.772 "io_timeout": 0, 00:33:03.772 "avg_latency_us": 11250.598407035428, 00:33:03.772 "min_latency_us": 2356.175238095238, 00:33:03.772 "max_latency_us": 11421.988571428572 00:33:03.772 } 00:33:03.772 ], 00:33:03.772 "core_count": 1 00:33:03.772 } 00:33:03.772 03:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:03.772 [2024-12-14 03:14:12.570368] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:03.772 [2024-12-14 03:14:12.570422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365343 ] 00:33:03.772 [2024-12-14 03:14:12.647672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.772 [2024-12-14 03:14:12.667925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.772 [2024-12-14 03:14:14.510445] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:03.772 [2024-12-14 03:14:14.510488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.772 [2024-12-14 03:14:14.510499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.772 [2024-12-14 03:14:14.510507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.772 [2024-12-14 03:14:14.510513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.772 [2024-12-14 03:14:14.510520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.772 [2024-12-14 03:14:14.510527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.772 [2024-12-14 03:14:14.510533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.772 [2024-12-14 03:14:14.510540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.772 [2024-12-14 03:14:14.510546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:33:03.772 [2024-12-14 03:14:14.510570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:03.772 [2024-12-14 03:14:14.510584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108a460 (9): Bad file descriptor 00:33:03.772 [2024-12-14 03:14:14.602471] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:03.772 Running I/O for 1 seconds... 00:33:03.772 11308.00 IOPS, 44.17 MiB/s 00:33:03.772 Latency(us) 00:33:03.772 [2024-12-14T02:14:18.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.772 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:03.772 Verification LBA range: start 0x0 length 0x4000 00:33:03.772 NVMe0n1 : 1.01 11334.61 44.28 0.00 0.00 11250.60 2356.18 11421.99 00:33:03.772 [2024-12-14T02:14:18.905Z] =================================================================================================================== 00:33:03.772 [2024-12-14T02:14:18.905Z] Total : 11334.61 44.28 0.00 0.00 11250.60 2356.18 11421.99 00:33:03.772 03:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:03.772 03:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:04.031 03:14:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:04.290 03:14:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:04.290 03:14:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:04.549 03:14:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:04.549 03:14:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:07.838 03:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:07.838 03:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:07.838 03:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 365343 00:33:07.838 03:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 365343 ']' 00:33:07.838 03:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 365343 00:33:07.838 03:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:07.838 03:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:07.838 03:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 365343 00:33:07.838 03:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:07.838 03:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:07.838 03:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 365343' 00:33:07.838 killing process with pid 365343 00:33:07.838 03:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 365343 00:33:07.838 03:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 365343 00:33:08.096 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:08.096 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:08.354 rmmod nvme_tcp 00:33:08.354 rmmod nvme_fabrics 00:33:08.354 rmmod nvme_keyring 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 365087 ']' 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 365087 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 365087 ']' 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 365087 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 365087 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 365087' 00:33:08.354 killing process with pid 365087 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 365087 00:33:08.354 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 365087 00:33:08.613 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
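Once the final path checks pass, the script kills the RPC-driven bdevperf (pid 365343 above), deletes the subsystem from the target, removes the capture file, and lets nvmftestfini unload the initiator modules and stop the target; the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above come from that step. Reduced to the commands visible in the trace (bdevperf_pid and nvmfpid stand in for the concrete pids 365343 and 365087 of this run):

    # Stop the RPC-driven bdevperf from the second half of the test.
    kill "$bdevperf_pid" && wait "$bdevperf_pid"

    # Remove the subsystem from the target and drop the capture file.
    sync
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f test/nvmf/host/try.txt

    # nvmftestfini: unload the initiator modules, then stop the target process.
    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring here
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"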
00:33:08.613 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:08.613 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:08.613 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:08.613 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:33:08.613 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:08.613 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:33:08.613 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:08.613 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:08.613 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.613 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.613 03:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.516 03:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:10.516 00:33:10.516 real 0m37.036s 00:33:10.516 user 1m57.394s 00:33:10.516 sys 0m7.867s 00:33:10.516 03:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:10.516 03:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:10.516 ************************************ 00:33:10.516 END TEST nvmf_failover 00:33:10.516 ************************************ 00:33:10.516 03:14:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:10.516 03:14:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:10.516 03:14:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:10.516 03:14:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.776 ************************************ 00:33:10.776 START TEST nvmf_host_discovery 00:33:10.776 ************************************ 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:10.776 * Looking for test storage... 
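After printing the wall-clock summary and the END TEST banner, nvmf_host.sh immediately launches the next suite through the same run_test wrapper, which surrounds the child script with the banners and timing seen above and propagates its exit status. A rough illustration of that behaviour (a sketch inferred from the log output, not the actual autotest_common.sh definition):

    run_test() {                 # run_test <name> <command> [args...]
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                # e.g. test/nvmf/host/discovery.sh --transport=tcp
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }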
00:33:10.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:10.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.776 --rc genhtml_branch_coverage=1 00:33:10.776 --rc genhtml_function_coverage=1 00:33:10.776 --rc genhtml_legend=1 00:33:10.776 --rc geninfo_all_blocks=1 00:33:10.776 --rc geninfo_unexecuted_blocks=1 00:33:10.776 00:33:10.776 ' 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:10.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.776 --rc genhtml_branch_coverage=1 00:33:10.776 --rc genhtml_function_coverage=1 00:33:10.776 --rc genhtml_legend=1 00:33:10.776 --rc geninfo_all_blocks=1 00:33:10.776 --rc geninfo_unexecuted_blocks=1 00:33:10.776 00:33:10.776 ' 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:10.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.776 --rc genhtml_branch_coverage=1 00:33:10.776 --rc genhtml_function_coverage=1 00:33:10.776 --rc genhtml_legend=1 00:33:10.776 --rc geninfo_all_blocks=1 00:33:10.776 --rc geninfo_unexecuted_blocks=1 00:33:10.776 00:33:10.776 ' 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:10.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.776 --rc genhtml_branch_coverage=1 00:33:10.776 --rc genhtml_function_coverage=1 00:33:10.776 --rc genhtml_legend=1 00:33:10.776 --rc geninfo_all_blocks=1 00:33:10.776 --rc geninfo_unexecuted_blocks=1 00:33:10.776 00:33:10.776 ' 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:10.776 03:14:25 
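The lcov version gate traced above (lt 1.15 2 via cmp_versions) is a plain component-wise compare of dot-separated versions. Below is a minimal, hedged sketch of that idea; the helper name and exact splitting are illustrative, the real logic lives in scripts/common.sh.

  version_lt() {                        # returns 0 if version $1 is older than $2
      local -a a b
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                          # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the branch taken above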
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.776 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:10.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:10.777 03:14:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:17.348 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:17.348 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:17.348 03:14:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:17.348 Found net devices under 0000:af:00.0: cvl_0_0 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:17.348 Found net devices under 0000:af:00.1: cvl_0_1 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:17.348 
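The device scan above reduces to: take the PCI addresses of supported NICs (here two Intel E810 0x159b ports) and read their interface names out of sysfs. Illustrative sketch only; variable names mirror the trace, but the real code is gather_supported_nvmf_pci_devs in nvmf/common.sh.

  declare -a pci_devs=(0000:af:00.0 0000:af:00.1)        # E810 ports reported above
  declare -a net_devs=()
  for pci in "${pci_devs[@]}"; do
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $path ]] && net_devs+=("${path##*/}")    # e.g. cvl_0_0, cvl_0_1
      done
  done
  printf 'Found net devices: %s\n' "${net_devs[*]}"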
03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:17.348 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:17.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:17.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:33:17.349 00:33:17.349 --- 10.0.0.2 ping statistics --- 00:33:17.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.349 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:17.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:17.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:33:17.349 00:33:17.349 --- 10.0.0.1 ping statistics --- 00:33:17.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.349 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=367721 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 367721 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 367721 ']' 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.349 [2024-12-14 03:14:31.737951] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
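Summarised, the network setup just traced splits the two ports across namespaces: cvl_0_0 becomes the target side inside cvl_0_0_ns_spdk (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and both directions are ping-verified. The commands below are lifted from the trace as a recap, not a replacement for nvmf_tcp_init.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
          -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator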
00:33:17.349 [2024-12-14 03:14:31.737996] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:17.349 [2024-12-14 03:14:31.815971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.349 [2024-12-14 03:14:31.837264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:17.349 [2024-12-14 03:14:31.837299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:17.349 [2024-12-14 03:14:31.837307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:17.349 [2024-12-14 03:14:31.837336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:17.349 [2024-12-14 03:14:31.837342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:17.349 [2024-12-14 03:14:31.837811] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.349 [2024-12-14 03:14:31.968799] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.349 [2024-12-14 03:14:31.980960] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.349 null0 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.349 03:14:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.349 null1 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=367748 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 367748 /tmp/host.sock 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 367748 ']' 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:17.349 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.349 [2024-12-14 03:14:32.057577] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:33:17.349 [2024-12-14 03:14:32.057619] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367748 ] 00:33:17.349 [2024-12-14 03:14:32.130218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.349 [2024-12-14 03:14:32.152919] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.349 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.350 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.609 [2024-12-14 03:14:32.562461] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:17.609 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:17.610 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.869 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:33:17.869 03:14:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:18.436 [2024-12-14 03:14:33.307906] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:18.436 [2024-12-14 03:14:33.307928] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:18.436 [2024-12-14 03:14:33.307939] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:18.436 
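Condensing the discovery flow exercised so far: the target app (default /var/tmp/spdk.sock) gets the transport, the discovery listener on 8009 and two null bdevs; the host app on /tmp/host.sock starts the discovery service; then the subsystem is assembled piece by piece so the discovery poller can observe it appear. Illustrative recap of the RPCs from the trace, where rpc.py stands in for the full scripts/rpc.py path and the rpc_cmd wrapper.

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc.py bdev_null_create null0 1000 512
  rpc.py bdev_null_create null1 1000 512
  rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
          -q nqn.2021-12.io.spdk:test
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 once attached
  rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # expect nvme0n1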
[2024-12-14 03:14:33.438334] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:18.695 [2024-12-14 03:14:33.661437] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:18.695 [2024-12-14 03:14:33.662133] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6aec60:1 started. 00:33:18.695 [2024-12-14 03:14:33.663490] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:18.695 [2024-12-14 03:14:33.663505] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:18.695 [2024-12-14 03:14:33.665774] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6aec60 was disconnected and freed. delete nvme_qpair. 00:33:18.695 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:18.695 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.696 03:14:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.696 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:18.955 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.955 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:18.955 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:18.955 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:18.955 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:18.955 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:18.955 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:18.955 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:18.955 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:18.955 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.956 [2024-12-14 03:14:33.953525] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x67d480:1 started. 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:18.956 [2024-12-14 03:14:33.956354] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x67d480 was disconnected and freed. delete nvme_qpair. 
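The xtrace above keeps expanding the same polling pattern: a small helper queries the host app over its private RPC socket, and waitforcondition re-evaluates the comparison up to ten times with a one-second pause between tries. A minimal sketch of that pattern, paraphrased from the expansions visible in the trace rather than copied from the real autotest_common.sh / discovery.sh helpers, assuming rpc_cmd wraps SPDK's scripts/rpc.py and the host app listens on /tmp/host.sock as shown:

# Sketch only: condensed from the xtrace, not the verbatim test helpers.
get_bdev_list() {
    # Bdev names reported by the host app, normalized to a single sorted line.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

waitforcondition() {
    # Re-check the condition up to 10 times, one second apart (mirrors max=10 / sleep 1 above).
    local cond=$1 max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}

# Example: block until the namespace added via nvmf_subsystem_add_ns shows up as a second bdev.
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'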
00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:18.956 03:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.956 [2024-12-14 03:14:34.058453] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:18.956 [2024-12-14 03:14:34.059231] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:18.956 [2024-12-14 03:14:34.059250] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.956 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:19.250 03:14:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.250 [2024-12-14 03:14:34.187633] 
bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:19.250 03:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:19.251 [2024-12-14 03:14:34.292204] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:33:19.251 [2024-12-14 03:14:34.292236] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:19.251 [2024-12-14 03:14:34.292243] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:19.251 [2024-12-14 03:14:34.292247] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.303 [2024-12-14 03:14:35.318230] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:20.303 [2024-12-14 03:14:35.318251] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:20.303 [2024-12-14 03:14:35.325607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.303 [2024-12-14 03:14:35.325635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.303 [2024-12-14 03:14:35.325643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.303 [2024-12-14 03:14:35.325650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.303 [2024-12-14 03:14:35.325657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.303 [2024-12-14 03:14:35.325663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.303 [2024-12-14 03:14:35.325670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.303 [2024-12-14 03:14:35.325677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.303 [2024-12-14 03:14:35.325683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680d70 is same with the state(6) to be set 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:20.303 [2024-12-14 03:14:35.335621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x680d70 (9): Bad file descriptor 00:33:20.303 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.303 [2024-12-14 03:14:35.345656] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:20.303 [2024-12-14 03:14:35.345667] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:20.303 [2024-12-14 03:14:35.345673] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:20.303 [2024-12-14 03:14:35.345678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:20.303 [2024-12-14 03:14:35.345693] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:20.303 [2024-12-14 03:14:35.345954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.303 [2024-12-14 03:14:35.345968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x680d70 with addr=10.0.0.2, port=4420 00:33:20.303 [2024-12-14 03:14:35.345976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680d70 is same with the state(6) to be set 00:33:20.303 [2024-12-14 03:14:35.345987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x680d70 (9): Bad file descriptor 00:33:20.303 [2024-12-14 03:14:35.345997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:20.303 [2024-12-14 03:14:35.346003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:20.303 [2024-12-14 03:14:35.346010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
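Stepping back from the reconnect noise for a moment: the multipath step at discovery.sh@118 through @122 above comes down to one target-side RPC plus a host-side poll. The sketch below is assembled from the commands already visible in the trace, reuses the waitforcondition helper sketched earlier, and assumes NVMF_PORT/NVMF_SECOND_PORT are 4420/4421 as the comparisons above show.

# Target side: expose the subsystem on a second TCP portal.
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421

# Host side: the discovery AER adds the new path; poll until both service IDs are reported.
get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'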
00:33:20.303 [2024-12-14 03:14:35.346016] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:20.303 [2024-12-14 03:14:35.346023] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:20.303 [2024-12-14 03:14:35.346028] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:20.303 [2024-12-14 03:14:35.355724] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:20.303 [2024-12-14 03:14:35.355735] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:20.303 [2024-12-14 03:14:35.355739] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:20.303 [2024-12-14 03:14:35.355743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:20.303 [2024-12-14 03:14:35.355755] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:20.303 [2024-12-14 03:14:35.355909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.303 [2024-12-14 03:14:35.355919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x680d70 with addr=10.0.0.2, port=4420 00:33:20.303 [2024-12-14 03:14:35.355926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680d70 is same with the state(6) to be set 00:33:20.303 [2024-12-14 03:14:35.355935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x680d70 (9): Bad file descriptor 00:33:20.303 [2024-12-14 03:14:35.355944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:20.303 [2024-12-14 03:14:35.355951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:20.303 [2024-12-14 03:14:35.355957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:20.303 [2024-12-14 03:14:35.355963] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:20.303 [2024-12-14 03:14:35.355967] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:20.304 [2024-12-14 03:14:35.355971] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:20.304 [2024-12-14 03:14:35.365786] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:20.304 [2024-12-14 03:14:35.365801] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:20.304 [2024-12-14 03:14:35.365805] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:20.304 [2024-12-14 03:14:35.365809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:20.304 [2024-12-14 03:14:35.365822] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:20.304 [2024-12-14 03:14:35.366048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.304 [2024-12-14 03:14:35.366061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x680d70 with addr=10.0.0.2, port=4420 00:33:20.304 [2024-12-14 03:14:35.366068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680d70 is same with the state(6) to be set 00:33:20.304 [2024-12-14 03:14:35.366079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x680d70 (9): Bad file descriptor 00:33:20.304 [2024-12-14 03:14:35.366088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:20.304 [2024-12-14 03:14:35.366094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:20.304 [2024-12-14 03:14:35.366101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:20.304 [2024-12-14 03:14:35.366107] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:20.304 [2024-12-14 03:14:35.366114] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:20.304 [2024-12-14 03:14:35.366118] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:20.304 [2024-12-14 03:14:35.375854] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:20.304 [2024-12-14 03:14:35.375866] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:20.304 [2024-12-14 03:14:35.375870] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:20.304 [2024-12-14 03:14:35.375874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:20.304 [2024-12-14 03:14:35.375886] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:20.304 [2024-12-14 03:14:35.376142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.304 [2024-12-14 03:14:35.376155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x680d70 with addr=10.0.0.2, port=4420 00:33:20.304 [2024-12-14 03:14:35.376162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680d70 is same with the state(6) to be set 00:33:20.304 [2024-12-14 03:14:35.376173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x680d70 (9): Bad file descriptor 00:33:20.304 [2024-12-14 03:14:35.376182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:20.304 [2024-12-14 03:14:35.376188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:20.304 [2024-12-14 03:14:35.376194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:20.304 [2024-12-14 03:14:35.376200] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:20.304 [2024-12-14 03:14:35.376204] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:20.304 [2024-12-14 03:14:35.376208] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:20.304 [2024-12-14 03:14:35.385916] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:20.304 [2024-12-14 03:14:35.385932] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:20.304 [2024-12-14 03:14:35.385936] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:20.304 [2024-12-14 03:14:35.385940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:20.304 [2024-12-14 03:14:35.385953] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
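The burst of connect() failures above is expected rather than a fault: discovery.sh@127 removed the 10.0.0.2:4420 listener, so every reconnect attempt to that port is refused (errno 111 is ECONNREFUSED on Linux) until the next discovery log page prunes the stale path. Condensed into the same sketch style, reusing the helpers above:

# Target side: retire the original portal.
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Host side: reconnects to 4420 now fail with ECONNREFUSED; once the discovery log page is
# reprocessed the stale path is dropped, leaving only the second portal.
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'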
00:33:20.304 [2024-12-14 03:14:35.386109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.304 [2024-12-14 03:14:35.386121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x680d70 with addr=10.0.0.2, port=4420 00:33:20.304 [2024-12-14 03:14:35.386128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680d70 is same with the state(6) to be set 00:33:20.304 [2024-12-14 03:14:35.386138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x680d70 (9): Bad file descriptor 00:33:20.304 [2024-12-14 03:14:35.386148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:20.304 [2024-12-14 03:14:35.386153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:20.304 [2024-12-14 03:14:35.386160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:20.304 [2024-12-14 03:14:35.386165] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:20.304 [2024-12-14 03:14:35.386170] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:20.304 [2024-12-14 03:14:35.386173] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:20.304 [2024-12-14 03:14:35.395984] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:20.304 [2024-12-14 03:14:35.395994] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:20.304 [2024-12-14 03:14:35.395998] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:20.304 [2024-12-14 03:14:35.396001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:20.304 [2024-12-14 03:14:35.396013] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:20.304 [2024-12-14 03:14:35.396254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.304 [2024-12-14 03:14:35.396265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x680d70 with addr=10.0.0.2, port=4420 00:33:20.304 [2024-12-14 03:14:35.396272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680d70 is same with the state(6) to be set 00:33:20.304 [2024-12-14 03:14:35.396282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x680d70 (9): Bad file descriptor 00:33:20.304 [2024-12-14 03:14:35.396291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:20.304 [2024-12-14 03:14:35.396297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:20.304 [2024-12-14 03:14:35.396304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:20.304 [2024-12-14 03:14:35.396309] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:33:20.304 [2024-12-14 03:14:35.396318] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:20.304 [2024-12-14 03:14:35.396322] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:20.304 [2024-12-14 03:14:35.404930] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:20.304 [2024-12-14 03:14:35.404944] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.304 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:20.593 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.593 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:20.593 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:20.593 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:20.594 03:14:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.594 03:14:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:21.638 [2024-12-14 03:14:36.726474] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:21.638 [2024-12-14 03:14:36.726490] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:21.638 [2024-12-14 03:14:36.726502] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:21.932 [2024-12-14 03:14:36.812756] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:22.227 [2024-12-14 03:14:37.079930] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:22.227 [2024-12-14 03:14:37.080583] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x67ca00:1 started. 00:33:22.227 [2024-12-14 03:14:37.082257] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:22.227 [2024-12-14 03:14:37.082281] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:22.227 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.227 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:22.227 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:22.227 [2024-12-14 03:14:37.084608] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x67ca00 was disconnected and freed. delete nvme_qpair. 
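The final phase of the test (discovery.sh@134 onward) tears the discovery service down, confirms the controllers and bdevs disappear, then restarts it and exercises the error paths. Reduced to the RPCs seen in the trace (the harness's NOT helper simply asserts that the wrapped command fails, approximated here with !; the duplicate start is rejected with JSON-RPC error -17 "File exists", as the request/response dump just below shows):

# Stop discovery: the attached controller and its bdevs are removed from the host app.
rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
waitforcondition '[[ "$(get_bdev_list)" == "" ]]'

# Restart it, waiting for the (re)attach to complete (-w).
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w

# A second start with the same name must fail ("File exists").
! rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w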
00:33:22.227 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:22.227 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:22.227 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:22.227 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:22.227 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:22.227 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:22.227 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.227 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.227 request: 00:33:22.227 { 00:33:22.227 "name": "nvme", 00:33:22.227 "trtype": "tcp", 00:33:22.227 "traddr": "10.0.0.2", 00:33:22.227 "adrfam": "ipv4", 00:33:22.227 "trsvcid": "8009", 00:33:22.227 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:22.227 "wait_for_attach": true, 00:33:22.227 "method": "bdev_nvme_start_discovery", 00:33:22.227 "req_id": 1 00:33:22.227 } 00:33:22.227 Got JSON-RPC error response 00:33:22.227 response: 00:33:22.227 { 00:33:22.227 "code": -17, 00:33:22.227 "message": "File exists" 00:33:22.227 } 00:33:22.227 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:22.227 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:22.227 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:22.228 03:14:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.228 request: 00:33:22.228 { 00:33:22.228 "name": "nvme_second", 00:33:22.228 "trtype": "tcp", 00:33:22.228 "traddr": "10.0.0.2", 00:33:22.228 "adrfam": "ipv4", 00:33:22.228 "trsvcid": "8009", 00:33:22.228 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:22.228 "wait_for_attach": true, 00:33:22.228 "method": "bdev_nvme_start_discovery", 00:33:22.228 "req_id": 1 00:33:22.228 } 00:33:22.228 Got JSON-RPC error response 00:33:22.228 response: 00:33:22.228 { 00:33:22.228 "code": -17, 00:33:22.228 "message": "File exists" 00:33:22.228 } 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.228 03:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:23.323 [2024-12-14 03:14:38.301890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.323 [2024-12-14 03:14:38.301916] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a4610 with addr=10.0.0.2, port=8010 00:33:23.323 [2024-12-14 03:14:38.301930] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:23.323 [2024-12-14 03:14:38.301937] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:23.323 [2024-12-14 03:14:38.301943] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:24.294 [2024-12-14 03:14:39.304310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.294 [2024-12-14 03:14:39.304337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e7c80 with addr=10.0.0.2, port=8010 00:33:24.294 [2024-12-14 03:14:39.304349] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:24.294 [2024-12-14 03:14:39.304355] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:24.294 [2024-12-14 03:14:39.304361] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:25.231 [2024-12-14 03:14:40.306495] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:25.231 request: 00:33:25.231 { 00:33:25.231 "name": "nvme_second", 00:33:25.231 "trtype": "tcp", 00:33:25.231 "traddr": "10.0.0.2", 00:33:25.231 "adrfam": "ipv4", 00:33:25.231 "trsvcid": "8010", 00:33:25.231 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:25.231 "wait_for_attach": false, 00:33:25.231 "attach_timeout_ms": 3000, 00:33:25.231 "method": "bdev_nvme_start_discovery", 00:33:25.231 "req_id": 1 00:33:25.231 } 00:33:25.231 Got JSON-RPC error response 00:33:25.231 response: 00:33:25.231 { 00:33:25.231 "code": -110, 00:33:25.231 "message": "Connection timed out" 00:33:25.231 } 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - 
SIGINT SIGTERM EXIT 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 367748 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:25.231 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:25.490 rmmod nvme_tcp 00:33:25.490 rmmod nvme_fabrics 00:33:25.490 rmmod nvme_keyring 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 367721 ']' 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 367721 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 367721 ']' 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 367721 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 367721 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 367721' 00:33:25.490 killing process with pid 367721 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 367721 00:33:25.490 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 367721 00:33:25.748 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:25.748 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:25.748 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:25.748 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:25.748 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:33:25.748 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:25.748 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:33:25.748 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:25.748 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:25.748 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.748 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.748 03:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.654 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:27.654 00:33:27.654 real 0m17.017s 00:33:27.654 user 0m20.465s 00:33:27.654 sys 0m5.650s 00:33:27.654 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:27.654 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:27.654 ************************************ 00:33:27.654 END TEST nvmf_host_discovery 00:33:27.654 ************************************ 00:33:27.654 03:14:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:27.654 03:14:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:27.654 03:14:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:27.654 03:14:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.654 ************************************ 00:33:27.654 START TEST nvmf_host_multipath_status 00:33:27.654 ************************************ 00:33:27.654 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:27.913 * Looking for test storage... 
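Before the multipath output below, the two negative discovery cases traced above reduce to a pair of bdev_nvme_start_discovery calls against the host socket: asking for a second discovery service on 10.0.0.2:8009 is rejected with JSON-RPC error -17 ("File exists"), while pointing at port 8010, where nothing answers, with a 3000 ms attach timeout ends in -110 ("Connection timed out"). A minimal re-typed sketch, assuming rpc.py is reachable as ./scripts/rpc.py and re-using the socket path, addresses and hostnqn from this run:

    #!/usr/bin/env bash
    # Hedged sketch of the two failing discovery attempts seen in the trace above.
    RPC=./scripts/rpc.py          # assumed location of SPDK's rpc.py
    HOST_SOCK=/tmp/host.sock      # host-side RPC socket used by this run

    # A discovery service for 10.0.0.2:8009 already exists -> error -17 "File exists"
    "$RPC" -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 \
        -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "expected: File exists"

    # Nothing listens on 8010; with a 3000 ms attach timeout (-T) the connect retries
    # fail and the call returns error -110 "Connection timed out"
    "$RPC" -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 \
        -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || echo "expected: timeout"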
00:33:27.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:27.913 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:27.913 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:33:27.913 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:27.913 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:27.913 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:27.913 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:27.913 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:27.913 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:27.913 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:27.913 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:27.913 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.914 --rc genhtml_branch_coverage=1 00:33:27.914 --rc genhtml_function_coverage=1 00:33:27.914 --rc genhtml_legend=1 00:33:27.914 --rc geninfo_all_blocks=1 00:33:27.914 --rc geninfo_unexecuted_blocks=1 00:33:27.914 00:33:27.914 ' 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.914 --rc genhtml_branch_coverage=1 00:33:27.914 --rc genhtml_function_coverage=1 00:33:27.914 --rc genhtml_legend=1 00:33:27.914 --rc geninfo_all_blocks=1 00:33:27.914 --rc geninfo_unexecuted_blocks=1 00:33:27.914 00:33:27.914 ' 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.914 --rc genhtml_branch_coverage=1 00:33:27.914 --rc genhtml_function_coverage=1 00:33:27.914 --rc genhtml_legend=1 00:33:27.914 --rc geninfo_all_blocks=1 00:33:27.914 --rc geninfo_unexecuted_blocks=1 00:33:27.914 00:33:27.914 ' 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.914 --rc genhtml_branch_coverage=1 00:33:27.914 --rc genhtml_function_coverage=1 00:33:27.914 --rc genhtml_legend=1 00:33:27.914 --rc geninfo_all_blocks=1 00:33:27.914 --rc geninfo_unexecuted_blocks=1 00:33:27.914 00:33:27.914 ' 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
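The lcov gate above runs through the generic dotted-version comparison in scripts/common.sh (the lt 1.15 2 call): both version strings are split on '.', '-' and ':' and compared field by field, with missing fields treated as zero. A standalone illustration of the same idea, not the SPDK helper itself:

    #!/usr/bin/env bash
    # Sketch of a field-by-field dotted-version "less than" test, mirroring the
    # lt/cmp_versions behaviour traced above. Illustration only.
    version_lt() {
        local IFS=.-:
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # pad shorter version with zeros
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"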
00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:27.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:27.914 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:27.915 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:27.915 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.915 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.915 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.915 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:27.915 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:27.915 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:27.915 03:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:34.483 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:34.483 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:34.483 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:34.483 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:34.483 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:34.483 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:34.483 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:34.484 03:14:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:34.484 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
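The NIC scan above works purely off PCI IDs: nvmf/common.sh collects Intel E810 parts (0x1592, 0x159b) and X722 (0x37d2) plus a list of Mellanox IDs, then resolves each matching PCI address to its kernel net device through /sys/bus/pci/devices/<addr>/net, which is how it arrives at cvl_0_0 and cvl_0_1 below. A rough standalone equivalent for the E810 case, assuming lspci is available (illustration only, not the common.sh code):

    #!/usr/bin/env bash
    # Find Intel E810 ports (device IDs 0x1592 / 0x159b, as matched in nvmf/common.sh)
    # and resolve their net device names via sysfs.
    for id in 1592 159b; do
        for pci in $(lspci -Dnd 8086:$id | awk '{print $1}'); do
            for dev in /sys/bus/pci/devices/$pci/net/*; do
                [ -e "$dev" ] && echo "$pci -> $(basename "$dev")"
            done
        done
    done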
00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:34.484 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:34.484 Found net devices under 0000:af:00.0: cvl_0_0 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:33:34.484 Found net devices under 0000:af:00.1: cvl_0_1 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:34.484 03:14:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:34.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:34.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:33:34.484 00:33:34.484 --- 10.0.0.2 ping statistics --- 00:33:34.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.484 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:34.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:34.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:33:34.484 00:33:34.484 --- 10.0.0.1 ping statistics --- 00:33:34.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.484 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:34.484 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:34.485 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:34.485 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:34.485 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:34.485 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:34.485 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:34.485 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=370288 00:33:34.485 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 370288 00:33:34.485 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:34.485 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 370288 ']' 00:33:34.485 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:34.485 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:34.485 03:14:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:34.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:34.485 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:34.485 03:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:34.485 [2024-12-14 03:14:48.856476] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:34.485 [2024-12-14 03:14:48.856521] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:34.485 [2024-12-14 03:14:48.933752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:34.485 [2024-12-14 03:14:48.955132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:34.485 [2024-12-14 03:14:48.955167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:34.485 [2024-12-14 03:14:48.955174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:34.485 [2024-12-14 03:14:48.955180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:34.485 [2024-12-14 03:14:48.955184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:34.485 [2024-12-14 03:14:48.956271] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.485 [2024-12-14 03:14:48.956272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.485 03:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:34.485 03:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:34.485 03:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:34.485 03:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:34.485 03:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:34.485 03:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:34.485 03:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=370288 00:33:34.485 03:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:34.485 [2024-12-14 03:14:49.243410] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:34.485 03:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:34.485 Malloc0 00:33:34.485 03:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:33:34.744 03:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:35.003 03:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:35.003 [2024-12-14 03:14:50.035388] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:35.003 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:35.261 [2024-12-14 03:14:50.227890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:35.261 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=370329 00:33:35.261 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:35.261 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:35.261 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 370329 /var/tmp/bdevperf.sock 00:33:35.261 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 370329 ']' 00:33:35.261 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:35.261 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:35.261 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:35.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
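Condensed, the target-side bring-up traced above (multipath_status.sh steps 36-42, after nvmf_tgt -m 0x3 has been started inside the cvl_0_0_ns_spdk namespace as pid 370288) is: create the TCP transport, back the subsystem with a 64 MiB / 512-byte-block malloc bdev, and expose that one namespace through two listeners on the same address. A re-typed sketch with the values from this run; the rpc.py path is abbreviated, the default /var/tmp/spdk.sock RPC socket is assumed, and the flag readings in the comments are my interpretation of the standard rpc.py options rather than anything stated in the log:

    #!/usr/bin/env bash
    RPC=./scripts/rpc.py                     # assumed path; the run uses the full workspace path
    NQN=nqn.2016-06.io.spdk:cnode1

    "$RPC" nvmf_create_transport -t tcp -o -u 8192          # TCP transport, options as used by the test
    "$RPC" bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512 B blocks
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2
    "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421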
00:33:35.261 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:35.261 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:35.520 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:35.520 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:35.520 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:35.779 03:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:36.038 Nvme0n1 00:33:36.038 03:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:36.297 Nvme0n1 00:33:36.297 03:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:36.297 03:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:38.832 03:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:38.832 03:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:38.832 03:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:38.832 03:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:39.768 03:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:39.768 03:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:39.768 03:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.768 03:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:40.027 03:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.027 03:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:40.027 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.027 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:40.286 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:40.286 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:40.286 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.286 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:40.286 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.286 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:40.286 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.286 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:40.544 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.544 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:40.544 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.544 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:40.802 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.802 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:40.802 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.802 03:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:41.076 03:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.076 03:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:41.076 03:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
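On the initiator side, bdevperf is pointed at both listeners under a single controller name with -x multipath, and from then on the test drives path selection purely by flipping the listeners' ANA states on the target, as the set_ANA_state calls around this point show (the trace here is midway through the non_optimized/optimized transition). A sketch re-typed from the trace; the -r/-l/-o option values are copied verbatim from the run rather than interpreted:

    #!/usr/bin/env bash
    RPC=./scripts/rpc.py                     # assumed path
    BPERF_SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options -r -1
    for port in 4420 4421; do                # same controller name for both paths
        "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
            -s "$port" -f ipv4 -n "$NQN" -x multipath -l -1 -o 10
    done

    # Mirrors host/multipath_status.sh@59-60: first argument applies to 4420, second to 4421.
    set_ANA_state() {
        "$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    set_ANA_state non_optimized optimized    # the transition happening at this point in the trace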
00:33:41.335 03:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:41.335 03:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:42.711 03:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:42.711 03:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:42.711 03:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.712 03:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:42.712 03:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:42.712 03:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:42.712 03:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.712 03:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:42.970 03:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.970 03:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:42.970 03:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.970 03:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:43.229 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.229 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:43.229 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.229 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:43.229 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.229 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:43.229 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
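Each check_status round above is six port_status probes, and every probe is the same two-step pattern visible in the trace: dump bdevperf's I/O paths over its RPC socket, then pull one boolean (current, connected or accessible) for one listener with jq. A compact sketch of roughly what host/multipath_status.sh@64 does, with names re-used from the script and the sort|xargs normalisation matching the helpers seen earlier in the log:

    #!/usr/bin/env bash
    RPC=./scripts/rpc.py                     # assumed path
    BPERF_SOCK=/var/tmp/bdevperf.sock

    # port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
    port_status() {
        local port=$1 field=$2 expected=$3 got
        got=$("$RPC" -s "$BPERF_SOCK" bdev_nvme_get_io_paths \
              | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field" \
              | sort | xargs)
        [[ "$got" == "$expected" ]]
    }

    port_status 4420 current true && echo "4420 is currently the active path"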
00:33:43.229 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:43.487 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.487 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:43.487 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.487 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:43.745 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.745 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:43.745 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:44.004 03:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:44.263 03:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:45.199 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:45.199 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:45.199 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.199 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:45.458 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:45.458 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:45.458 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.458 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:45.458 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:45.458 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:45.458 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:33:45.458 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.717 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:45.717 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:45.717 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.717 03:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:45.976 03:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:45.976 03:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:45.976 03:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.976 03:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:46.235 03:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.235 03:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:46.235 03:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.235 03:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:46.493 03:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.493 03:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:46.493 03:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:46.752 03:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:46.752 03:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:48.128 03:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:48.128 03:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:48.128 03:15:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.128 03:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:48.128 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.128 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:48.128 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.128 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:48.128 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:48.128 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:48.128 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.128 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:48.387 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.387 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:48.387 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.387 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:48.646 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.646 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:48.646 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.646 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:48.904 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.904 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:48.904 03:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.904 03:15:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:49.163 03:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:49.163 03:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:49.163 03:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:49.163 03:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:49.420 03:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:50.354 03:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:50.354 03:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:50.354 03:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.354 03:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:50.612 03:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:50.612 03:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:50.612 03:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:50.612 03:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.871 03:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:50.871 03:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:50.871 03:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.871 03:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:51.130 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.130 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:51.130 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.130 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:51.130 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.130 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:51.130 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.130 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:51.389 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:51.389 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:51.389 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.389 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:51.647 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:51.647 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:51.647 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:51.905 03:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:51.905 03:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:53.281 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:53.281 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:53.281 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.281 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:53.281 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:53.281 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:53.281 03:15:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.281 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:53.540 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.540 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:53.540 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.540 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:53.540 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.540 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:53.540 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.540 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:53.799 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.799 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:53.799 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.799 03:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:54.058 03:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:54.058 03:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:54.058 03:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.058 03:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:54.317 03:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.317 03:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:54.576 03:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:33:54.576 03:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:54.576 03:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:54.834 03:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:56.211 03:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:56.211 03:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:56.211 03:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.211 03:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:56.211 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.211 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:56.211 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.211 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:56.470 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.470 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:56.470 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.470 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:56.470 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.470 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:56.470 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:56.470 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.729 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.729 03:15:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:56.729 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.729 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:56.987 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.987 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:56.987 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.987 03:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:57.245 03:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.245 03:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:57.245 03:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:57.504 03:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:57.504 03:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:58.882 03:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:58.882 03:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:58.882 03:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.882 03:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:58.882 03:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:58.882 03:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:58.882 03:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.882 03:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:59.141 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.141 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:59.141 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.141 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:59.141 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.141 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:59.141 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:59.141 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.400 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.400 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:59.400 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.400 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:59.658 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.658 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:59.658 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.658 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:59.917 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.917 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:59.918 03:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:00.183 03:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:00.183 03:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
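Each block in this part of the log is the same cycle: set_ANA_state pushes a new ANA state to both listeners with nvmf_subsystem_listener_set_ana_state, the script sleeps one second so the host side can observe the change, and check_status then re-reads bdev_nvme_get_io_paths. A hedged sketch of that cycle, reusing the NQN, target address, ports and ANA states taken from the log and again assuming rpc.py is on PATH; the loop is illustrative rather than the test's exact code:

  #!/usr/bin/env bash
  NQN=nqn.2016-06.io.spdk:cnode1

  set_ana_state() {
      # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
      rpc.py nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      rpc.py nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # Walk through a few of the state combinations exercised above and dump the
  # resulting I/O path view from the bdevperf side after each change.
  for pair in "non_optimized optimized" "non_optimized non_optimized" "non_optimized inaccessible"; do
      set_ana_state $pair      # intentional word splitting: two positional args
      sleep 1                  # mirrors the sleep between set_ANA_state and check_status
      rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq '.poll_groups[].io_paths[] | {trsvcid: .transport.trsvcid, current, connected, accessible}'
  done

With the active_active multipath policy set via bdev_nvme_set_multipath_policy (as done a few entries below), both paths report current=true whenever both listeners are optimized or both are non_optimized, which is exactly what the subsequent check_status true true ... assertions verify.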
00:34:01.561 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:01.561 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:01.561 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.561 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:01.561 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:01.561 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:01.561 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:01.561 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.820 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:01.820 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:01.820 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.820 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:01.820 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:01.820 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:01.820 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.820 03:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:02.079 03:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.079 03:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:02.079 03:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.079 03:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:02.338 03:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.338 03:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:02.338 03:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:02.338 03:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.597 03:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.597 03:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:02.597 03:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:02.856 03:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:02.856 03:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:04.234 03:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:04.234 03:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:04.234 03:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.234 03:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:04.234 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:04.234 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:04.234 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.234 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:04.493 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:04.493 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:04.493 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.493 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:04.493 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:34:04.493 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:04.493 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.493 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:04.752 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:04.752 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:04.752 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.752 03:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:05.011 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:05.011 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:05.011 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.011 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:05.270 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:05.270 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 370329 00:34:05.270 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 370329 ']' 00:34:05.270 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 370329 00:34:05.270 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:05.270 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:05.270 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 370329 00:34:05.270 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:34:05.270 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:34:05.270 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 370329' 00:34:05.270 killing process with pid 370329 00:34:05.270 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 370329 00:34:05.270 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 370329 00:34:05.270 { 00:34:05.270 "results": [ 00:34:05.270 { 00:34:05.270 "job": "Nvme0n1", 00:34:05.270 
"core_mask": "0x4", 00:34:05.270 "workload": "verify", 00:34:05.270 "status": "terminated", 00:34:05.270 "verify_range": { 00:34:05.270 "start": 0, 00:34:05.270 "length": 16384 00:34:05.270 }, 00:34:05.270 "queue_depth": 128, 00:34:05.270 "io_size": 4096, 00:34:05.270 "runtime": 28.762101, 00:34:05.270 "iops": 10692.47340449851, 00:34:05.270 "mibps": 41.76747423632231, 00:34:05.270 "io_failed": 0, 00:34:05.270 "io_timeout": 0, 00:34:05.270 "avg_latency_us": 11951.226601931345, 00:34:05.270 "min_latency_us": 1217.097142857143, 00:34:05.270 "max_latency_us": 3019898.88 00:34:05.270 } 00:34:05.270 ], 00:34:05.270 "core_count": 1 00:34:05.270 } 00:34:05.533 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 370329 00:34:05.533 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:05.533 [2024-12-14 03:14:50.306663] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:34:05.533 [2024-12-14 03:14:50.306718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid370329 ] 00:34:05.533 [2024-12-14 03:14:50.383771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.533 [2024-12-14 03:14:50.405785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:34:05.533 Running I/O for 90 seconds... 00:34:05.533 11361.00 IOPS, 44.38 MiB/s [2024-12-14T02:15:20.666Z] 11484.50 IOPS, 44.86 MiB/s [2024-12-14T02:15:20.666Z] 11457.00 IOPS, 44.75 MiB/s [2024-12-14T02:15:20.666Z] 11471.75 IOPS, 44.81 MiB/s [2024-12-14T02:15:20.666Z] 11492.00 IOPS, 44.89 MiB/s [2024-12-14T02:15:20.666Z] 11506.67 IOPS, 44.95 MiB/s [2024-12-14T02:15:20.666Z] 11545.57 IOPS, 45.10 MiB/s [2024-12-14T02:15:20.667Z] 11551.75 IOPS, 45.12 MiB/s [2024-12-14T02:15:20.667Z] 11543.44 IOPS, 45.09 MiB/s [2024-12-14T02:15:20.667Z] 11547.80 IOPS, 45.11 MiB/s [2024-12-14T02:15:20.667Z] 11535.82 IOPS, 45.06 MiB/s [2024-12-14T02:15:20.667Z] 11533.00 IOPS, 45.05 MiB/s [2024-12-14T02:15:20.667Z] [2024-12-14 03:15:04.219334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.534 [2024-12-14 03:15:04.219371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 
[2024-12-14 03:15:04.219471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.534 [2024-12-14 03:15:04.219723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.219984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.219990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.220002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.220009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.220021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.220027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.220039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.534 [2024-12-14 03:15:04.220046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:05.534 [2024-12-14 03:15:04.220060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.220372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.220992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 
03:15:04.221024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126168 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.535 [2024-12-14 03:15:04.221356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:05.535 [2024-12-14 03:15:04.221370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 
p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221964] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.221980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.221987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.222004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.222011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.222027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.222035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.222052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.222058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.222076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.222083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.222100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.222106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.222123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.222130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.222146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.222153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.222169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.222176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.222192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 
[2024-12-14 03:15:04.222199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:05.536 [2024-12-14 03:15:04.222216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.536 [2024-12-14 03:15:04.222223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:04.222486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:04.222510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:04.222532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:04.222555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:04.222578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:04.222601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:04.222626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:04.222804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.537 [2024-12-14 03:15:04.222810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:05.537 11291.00 IOPS, 44.11 MiB/s [2024-12-14T02:15:20.670Z] 10484.50 IOPS, 40.96 MiB/s [2024-12-14T02:15:20.670Z] 9785.53 IOPS, 38.22 MiB/s [2024-12-14T02:15:20.670Z] 9365.00 IOPS, 36.58 MiB/s [2024-12-14T02:15:20.670Z] 9496.00 IOPS, 37.09 MiB/s [2024-12-14T02:15:20.670Z] 9609.44 IOPS, 37.54 MiB/s [2024-12-14T02:15:20.670Z] 9787.84 IOPS, 38.23 MiB/s [2024-12-14T02:15:20.670Z] 9975.65 IOPS, 38.97 MiB/s [2024-12-14T02:15:20.670Z] 10142.57 IOPS, 39.62 MiB/s [2024-12-14T02:15:20.670Z] 10201.55 IOPS, 39.85 MiB/s [2024-12-14T02:15:20.670Z] 10257.39 IOPS, 40.07 MiB/s [2024-12-14T02:15:20.670Z] 10321.67 IOPS, 40.32 MiB/s [2024-12-14T02:15:20.670Z] 10449.68 IOPS, 40.82 MiB/s [2024-12-14T02:15:20.670Z] 10571.92 IOPS, 41.30 MiB/s [2024-12-14T02:15:20.670Z] [2024-12-14 03:15:17.943790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:17.943826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:17.943858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:17.943866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:17.943884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:17.943892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:17.943904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:17.943911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:17.943924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:17.943930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:17.943943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:17.943950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:17.943962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:17.943969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:17.943981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.537 [2024-12-14 03:15:17.943988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:05.537 [2024-12-14 03:15:17.944000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.944007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.944019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.944026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.944038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.944045] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.944057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.944063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.944075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.944082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.944094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.944101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.944114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.944122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.944133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.944140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.944153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.944160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.944174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.944181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.944193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.538 [2024-12-14 03:15:17.944201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.944213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.538 [2024-12-14 03:15:17.944220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.944232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.538 [2024-12-14 
03:15:17.944239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.945456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.945480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.945500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.945519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.945538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.945560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.538 [2024-12-14 03:15:17.945580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.945598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.945618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16696 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.945637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.945656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.945675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.945694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.945713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.538 [2024-12-14 03:15:17.945732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.538 [2024-12-14 03:15:17.945744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.945751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.945763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.945770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.945782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.945791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.945802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.945809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.945821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:56 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.945828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.945842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.945849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.945860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.945867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.945879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.945886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.945898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.945905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.945917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.945923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.945935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.945942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.945954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.945961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.945973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.945979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.945992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.945998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.946011] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.946017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.946031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.946037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.946049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.539 [2024-12-14 03:15:17.946056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.946068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.539 [2024-12-14 03:15:17.946075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.946087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.946094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.946106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.946113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.946125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.946131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.946143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.946150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:05.539 [2024-12-14 03:15:17.946162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:05.539 [2024-12-14 03:15:17.946169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:05.539 10639.44 IOPS, 41.56 MiB/s [2024-12-14T02:15:20.672Z] 10672.46 IOPS, 41.69 MiB/s [2024-12-14T02:15:20.672Z] Received shutdown signal, test time was about 28.762749 seconds 00:34:05.539 00:34:05.539 Latency(us) 00:34:05.539 [2024-12-14T02:15:20.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:05.539 Job: 
Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:05.539 Verification LBA range: start 0x0 length 0x4000 00:34:05.539 Nvme0n1 : 28.76 10692.47 41.77 0.00 0.00 11951.23 1217.10 3019898.88 00:34:05.539 [2024-12-14T02:15:20.672Z] =================================================================================================================== 00:34:05.539 [2024-12-14T02:15:20.672Z] Total : 10692.47 41.77 0.00 0.00 11951.23 1217.10 3019898.88 00:34:05.539 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:05.539 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:05.539 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:05.539 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:05.539 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:05.539 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:05.539 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:05.539 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:05.539 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:05.539 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:05.539 rmmod nvme_tcp 00:34:05.539 rmmod nvme_fabrics 00:34:05.539 rmmod nvme_keyring 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 370288 ']' 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 370288 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 370288 ']' 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 370288 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 370288 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 370288' 00:34:05.799 killing process with pid 370288 00:34:05.799 03:15:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 370288 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 370288 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.799 03:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.337 03:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:08.337 00:34:08.337 real 0m40.196s 00:34:08.337 user 1m49.296s 00:34:08.337 sys 0m11.279s 00:34:08.337 03:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:08.337 03:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:08.337 ************************************ 00:34:08.337 END TEST nvmf_host_multipath_status 00:34:08.337 ************************************ 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.337 ************************************ 00:34:08.337 START TEST nvmf_discovery_remove_ifc 00:34:08.337 ************************************ 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:08.337 * Looking for test storage... 
00:34:08.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:08.337 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:08.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.338 --rc genhtml_branch_coverage=1 00:34:08.338 --rc genhtml_function_coverage=1 00:34:08.338 --rc genhtml_legend=1 00:34:08.338 --rc geninfo_all_blocks=1 00:34:08.338 --rc geninfo_unexecuted_blocks=1 00:34:08.338 00:34:08.338 ' 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:08.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.338 --rc genhtml_branch_coverage=1 00:34:08.338 --rc genhtml_function_coverage=1 00:34:08.338 --rc genhtml_legend=1 00:34:08.338 --rc geninfo_all_blocks=1 00:34:08.338 --rc geninfo_unexecuted_blocks=1 00:34:08.338 00:34:08.338 ' 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:08.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.338 --rc genhtml_branch_coverage=1 00:34:08.338 --rc genhtml_function_coverage=1 00:34:08.338 --rc genhtml_legend=1 00:34:08.338 --rc geninfo_all_blocks=1 00:34:08.338 --rc geninfo_unexecuted_blocks=1 00:34:08.338 00:34:08.338 ' 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:08.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.338 --rc genhtml_branch_coverage=1 00:34:08.338 --rc genhtml_function_coverage=1 00:34:08.338 --rc genhtml_legend=1 00:34:08.338 --rc geninfo_all_blocks=1 00:34:08.338 --rc geninfo_unexecuted_blocks=1 00:34:08.338 00:34:08.338 ' 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.338 
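The trace above is scripts/common.sh deciding which lcov flags to export: the version string is split on `.` and `-`, each field is checked to be numeric, and the fields are compared one by one against `2`. A standalone sketch of that comparison logic follows (an illustrative re-implementation for reference, not the SPDK helper itself):

```bash
#!/usr/bin/env bash
# Illustrative re-implementation of the component-wise version check traced
# above (cmp_versions / lt in scripts/common.sh); not SPDK's actual helper.
version_lt() {
    local a=$1 b=$2 va vb i ca cb
    IFS='.-' read -ra va <<< "$a"     # split on '.' and '-' as the trace does (IFS=.-)
    IFS='.-' read -ra vb <<< "$b"
    for ((i = 0; i < ${#va[@]} || i < ${#vb[@]}; i++)); do
        ca=${va[i]:-0}; cb=${vb[i]:-0}            # missing fields compare as 0
        [[ $ca =~ ^[0-9]+$ ]] || ca=0             # numeric guard, like the "decimal" step above
        [[ $cb =~ ^[0-9]+$ ]] || cb=0
        (( ca < cb )) && return 0
        (( ca > cb )) && return 1
    done
    return 1                                      # equal -> not strictly less-than
}

# Example mirroring the decision above: older lcov gets the extra --rc flags.
lcov_ver=$(lcov --version 2>/dev/null | awk '{print $NF}')
if [[ -n "$lcov_ver" ]] && version_lt "$lcov_ver" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
```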
03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:08.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:08.338 03:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.615 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:13.615 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:13.615 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:13.615 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:13.875 03:15:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:13.875 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.875 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:13.876 03:15:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:13.876 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:13.876 Found net devices under 0000:af:00.0: cvl_0_0 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:13.876 Found net devices under 0000:af:00.1: cvl_0_1 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:13.876 03:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:13.876 
03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:13.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:13.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:34:13.876 00:34:13.876 --- 10.0.0.2 ping statistics --- 00:34:13.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.876 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:14.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:14.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:34:14.136 00:34:14.136 --- 10.0.0.1 ping statistics --- 00:34:14.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.136 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=373655 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 373655 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 373655 ']' 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:14.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:14.136 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.136 [2024-12-14 03:15:29.113498] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:34:14.136 [2024-12-14 03:15:29.113544] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:14.136 [2024-12-14 03:15:29.191988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.136 [2024-12-14 03:15:29.213576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:14.136 [2024-12-14 03:15:29.213610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:14.136 [2024-12-14 03:15:29.213617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:14.136 [2024-12-14 03:15:29.213623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:14.136 [2024-12-14 03:15:29.213628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:14.136 [2024-12-14 03:15:29.214097] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.396 [2024-12-14 03:15:29.352756] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:14.396 [2024-12-14 03:15:29.360915] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:14.396 null0 00:34:14.396 [2024-12-14 03:15:29.392909] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=373680 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 373680 /tmp/host.sock 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 373680 ']' 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:14.396 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:14.396 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.396 [2024-12-14 03:15:29.462537] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:34:14.396 [2024-12-14 03:15:29.462577] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373680 ] 00:34:14.655 [2024-12-14 03:15:29.536044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.655 [2024-12-14 03:15:29.558442] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.655 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:14.655 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:14.655 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:14.655 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:14.655 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.655 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.655 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.655 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:14.655 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.655 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.655 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.655 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:14.655 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.655 03:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.034 [2024-12-14 03:15:30.749477] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:16.034 [2024-12-14 03:15:30.749502] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:16.034 [2024-12-14 03:15:30.749515] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:16.034 [2024-12-14 03:15:30.837767] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:16.034 [2024-12-14 03:15:31.019763] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:16.034 [2024-12-14 03:15:31.020495] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x111f710:1 started. 00:34:16.034 [2024-12-14 03:15:31.021766] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:16.034 [2024-12-14 03:15:31.021807] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:16.034 [2024-12-14 03:15:31.021825] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:16.034 [2024-12-14 03:15:31.021836] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:16.034 [2024-12-14 03:15:31.021855] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:16.034 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.034 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:16.034 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:16.034 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:16.034 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.034 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.034 [2024-12-14 03:15:31.028453] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x111f710 was disconnected and freed. delete nvme_qpair. 
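Once discovery attaches `nvme0` and exposes `nvme0n1`, the test enters the `wait_for_bdev` loop traced around this point: it polls `bdev_get_bdevs` over the host RPC socket once per second and compares the sorted, space-joined name list against the expected value. A sketch of that polling pattern, with an assumed `rpc.py` invocation standing in for the test's `rpc_cmd` wrapper:

```bash
# Sketch of the get_bdev_list / wait_for_bdev pattern visible in the trace:
# poll bdev_get_bdevs over the host RPC socket until the expected bdev appears.
HOST_SOCK=/tmp/host.sock
RPC=./scripts/rpc.py          # assumed location of SPDK's rpc.py

get_bdev_list() {
    "$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    local timeout=30
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        (( timeout-- > 0 )) || { echo "timed out waiting for '$expected'" >&2; return 1; }
        sleep 1
    done
}

wait_for_bdev nvme0n1      # present once discovery attaches the first controller
```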
00:34:16.034 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:16.034 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.034 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:16.034 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.034 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:16.034 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:16.034 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:16.294 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:16.294 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:16.294 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.294 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:16.294 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.294 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:16.294 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.294 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:16.294 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.294 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:16.294 03:15:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:17.232 03:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:17.232 03:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:17.232 03:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:17.232 03:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.232 03:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:17.232 03:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:17.232 03:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:17.232 03:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.232 03:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:17.232 03:15:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:18.609 03:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:18.609 03:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:18.609 03:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:18.609 03:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.609 03:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:18.609 03:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:18.609 03:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:18.609 03:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.609 03:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:18.609 03:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:19.547 03:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:19.547 03:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:19.547 03:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:19.547 03:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.547 03:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:19.547 03:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:19.547 03:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:19.547 03:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.547 03:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:19.547 03:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:20.482 03:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:20.483 03:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:20.483 03:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:20.483 03:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.483 03:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:20.483 03:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:20.483 03:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:20.483 03:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.483 03:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:20.483 03:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 
1 00:34:21.420 [2024-12-14 03:15:36.463351] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:21.420 [2024-12-14 03:15:36.463385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.420 [2024-12-14 03:15:36.463396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.420 [2024-12-14 03:15:36.463421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.420 [2024-12-14 03:15:36.463428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.420 [2024-12-14 03:15:36.463435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.420 [2024-12-14 03:15:36.463441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.420 [2024-12-14 03:15:36.463448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.420 [2024-12-14 03:15:36.463455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.420 [2024-12-14 03:15:36.463462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.420 [2024-12-14 03:15:36.463469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.420 [2024-12-14 03:15:36.463476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbec0 is same with the state(6) to be set 00:34:21.420 [2024-12-14 03:15:36.473372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10fbec0 (9): Bad file descriptor 00:34:21.420 [2024-12-14 03:15:36.483406] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:21.420 [2024-12-14 03:15:36.483416] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:21.420 [2024-12-14 03:15:36.483422] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:21.420 [2024-12-14 03:15:36.483426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:21.421 [2024-12-14 03:15:36.483442] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
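The error burst above is the host reacting to the interface removal: `spdk_sock_recv()` hits errno 110, outstanding admin commands are aborted with SQ DELETION, and bdev_nvme starts its disconnect/reconnect cycle. How long `nvme0n1` survives is bounded by the reconnect options passed to `bdev_nvme_start_discovery` earlier in this test; the failure-injection step and those knobs are summarized below (values are the ones used in this run):

```bash
# Failure injection as performed by the test (discovery_remove_ifc.sh@75-76 above):
# drop the target's address and take the link down inside its namespace.
NS=cvl_0_0_ns_spdk
ip netns exec "$NS" ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec "$NS" ip link set cvl_0_0 down

# The discovery entry was created with (values from this run):
#   --reconnect-delay-sec 1        # wait 1s between reconnect attempts
#   --fast-io-fail-timeout-sec 1   # fail queued I/O after 1s without a connection
#   --ctrlr-loss-timeout-sec 2     # give up on the controller after 2s, at which
#                                  # point the bdev list polled above becomes empty
```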
00:34:21.421 03:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:21.421 03:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:21.421 03:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:21.421 03:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:21.421 03:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.421 03:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:21.421 03:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:22.799 [2024-12-14 03:15:37.524405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:22.799 [2024-12-14 03:15:37.524483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10fbec0 with addr=10.0.0.2, port=4420 00:34:22.799 [2024-12-14 03:15:37.524515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbec0 is same with the state(6) to be set 00:34:22.799 [2024-12-14 03:15:37.524563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10fbec0 (9): Bad file descriptor 00:34:22.799 [2024-12-14 03:15:37.525502] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:34:22.799 [2024-12-14 03:15:37.525564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:22.799 [2024-12-14 03:15:37.525587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:22.799 [2024-12-14 03:15:37.525609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:22.799 [2024-12-14 03:15:37.525628] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:22.799 [2024-12-14 03:15:37.525643] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:22.799 [2024-12-14 03:15:37.525656] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:22.799 [2024-12-14 03:15:37.525677] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:22.799 [2024-12-14 03:15:37.525691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:22.799 03:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.799 03:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:22.799 03:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:23.737 [2024-12-14 03:15:38.528198] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:23.737 [2024-12-14 03:15:38.528217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:23.737 [2024-12-14 03:15:38.528227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:23.737 [2024-12-14 03:15:38.528234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:23.737 [2024-12-14 03:15:38.528241] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:34:23.737 [2024-12-14 03:15:38.528247] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:23.737 [2024-12-14 03:15:38.528252] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:23.737 [2024-12-14 03:15:38.528256] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:23.737 [2024-12-14 03:15:38.528272] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:23.737 [2024-12-14 03:15:38.528289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:23.737 [2024-12-14 03:15:38.528297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.737 [2024-12-14 03:15:38.528306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:23.737 [2024-12-14 03:15:38.528317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.737 [2024-12-14 03:15:38.528328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:23.737 [2024-12-14 03:15:38.528335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.738 [2024-12-14 03:15:38.528342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:23.738 [2024-12-14 03:15:38.528348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.738 [2024-12-14 03:15:38.528355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:23.738 [2024-12-14 03:15:38.528361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.738 [2024-12-14 03:15:38.528367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
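At this stage the reconnect attempts have failed, the discovery entry for `nqn.2016-06.io.spdk:cnode0` has been removed, and the discovery controller itself is in failed state, so the bdev list the test polls is empty. The same state can be inspected out-of-band over the host RPC socket; a sketch follows (the `rpc.py` path is an assumption, and `bdev_nvme_get_discovery_info` availability depends on the SPDK version in use):

```bash
# Inspect controller / discovery state on the host app started with -r /tmp/host.sock.
RPC=./scripts/rpc.py            # assumed path to SPDK's rpc.py
HOST_SOCK=/tmp/host.sock

"$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers      # controllers known to bdev_nvme
"$RPC" -s "$HOST_SOCK" bdev_nvme_get_discovery_info   # state of the 10.0.0.2:8009 discovery service
"$RPC" -s "$HOST_SOCK" bdev_get_bdevs                 # the list the test polls; empty here
```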
00:34:23.738 [2024-12-14 03:15:38.528705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eb5e0 (9): Bad file descriptor 00:34:23.738 [2024-12-14 03:15:38.529715] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:23.738 [2024-12-14 03:15:38.529726] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:23.738 03:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:24.676 03:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:24.676 03:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:24.676 03:15:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:24.676 03:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.676 03:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:24.676 03:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:24.676 03:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:24.676 03:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.676 03:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:24.676 03:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:25.613 [2024-12-14 03:15:40.543940] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:25.613 [2024-12-14 03:15:40.543959] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:25.613 [2024-12-14 03:15:40.543970] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:25.613 [2024-12-14 03:15:40.631227] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:25.613 [2024-12-14 03:15:40.685726] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:34:25.613 [2024-12-14 03:15:40.686342] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x10fc260:1 started. 00:34:25.613 [2024-12-14 03:15:40.687345] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:25.613 [2024-12-14 03:15:40.687375] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:25.613 [2024-12-14 03:15:40.687391] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:25.613 [2024-12-14 03:15:40.687403] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:25.613 [2024-12-14 03:15:40.687410] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:25.613 [2024-12-14 03:15:40.693010] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x10fc260 was disconnected and freed. delete nvme_qpair. 
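The entries above show the two halves of the recovery: once the interface and address are restored inside the target namespace, the bdev_nvme discovery poller re-attaches the nvme1 controller on 10.0.0.2:4420, while the script keeps polling the host application's bdev list over its RPC socket until nvme1n1 reappears (seen immediately below). A minimal standalone sketch of that polling pattern, assuming SPDK's scripts/rpc.py is available; the rpc.py path and the 20-second timeout are illustrative assumptions, only the socket path, RPC name and bdev name come from the log:

    # Sketch only: mirrors the get_bdev_list/wait_for_bdev pattern in the xtrace above.
    RPC_PY=./scripts/rpc.py      # assumed location of SPDK's rpc.py
    RPC_SOCK=/tmp/host.sock      # socket the test's host app listens on
    want=nvme1n1
    for _ in $(seq 1 20); do
        # bdev_get_bdevs returns a JSON array of bdevs; keep only the names.
        names=$("$RPC_PY" -s "$RPC_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        [[ " $names " == *" $want "* ]] && { echo "found $want"; exit 0; }
        sleep 1
    done
    echo "timed out waiting for $want" >&2; exit 1

The test's own helper does the same thing, differing only in that it compares the joined list against the expected name with a bash pattern match and sleeps one second between attempts, as the surrounding xtrace shows.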
00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 373680 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 373680 ']' 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 373680 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 373680 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 373680' 00:34:25.873 killing process with pid 373680 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 373680 00:34:25.873 03:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 373680 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:26.133 rmmod nvme_tcp 00:34:26.133 rmmod nvme_fabrics 00:34:26.133 rmmod nvme_keyring 00:34:26.133 03:15:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 373655 ']' 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 373655 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 373655 ']' 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 373655 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 373655 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 373655' 00:34:26.133 killing process with pid 373655 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 373655 00:34:26.133 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 373655 00:34:26.392 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:26.392 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:26.392 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:26.392 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:26.392 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:34:26.392 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:26.392 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:34:26.392 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:26.392 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:26.392 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.392 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:26.392 03:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.300 03:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:28.300 00:34:28.300 real 0m20.322s 00:34:28.300 user 0m24.663s 00:34:28.300 sys 0m5.676s 00:34:28.300 03:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:28.300 03:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:28.300 ************************************ 00:34:28.300 END TEST nvmf_discovery_remove_ifc 00:34:28.300 ************************************ 00:34:28.300 03:15:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:28.300 03:15:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:28.300 03:15:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:28.300 03:15:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.560 ************************************ 00:34:28.560 START TEST nvmf_identify_kernel_target 00:34:28.560 ************************************ 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:28.560 * Looking for test storage... 00:34:28.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:28.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.560 --rc genhtml_branch_coverage=1 00:34:28.560 --rc genhtml_function_coverage=1 00:34:28.560 --rc genhtml_legend=1 00:34:28.560 --rc geninfo_all_blocks=1 00:34:28.560 --rc geninfo_unexecuted_blocks=1 00:34:28.560 00:34:28.560 ' 00:34:28.560 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:28.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.561 --rc genhtml_branch_coverage=1 00:34:28.561 --rc genhtml_function_coverage=1 00:34:28.561 --rc genhtml_legend=1 00:34:28.561 --rc geninfo_all_blocks=1 00:34:28.561 --rc geninfo_unexecuted_blocks=1 00:34:28.561 00:34:28.561 ' 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:28.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.561 --rc genhtml_branch_coverage=1 00:34:28.561 --rc genhtml_function_coverage=1 00:34:28.561 --rc genhtml_legend=1 00:34:28.561 --rc geninfo_all_blocks=1 00:34:28.561 --rc geninfo_unexecuted_blocks=1 00:34:28.561 00:34:28.561 ' 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:28.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.561 --rc genhtml_branch_coverage=1 00:34:28.561 --rc genhtml_function_coverage=1 00:34:28.561 --rc genhtml_legend=1 00:34:28.561 --rc geninfo_all_blocks=1 00:34:28.561 --rc geninfo_unexecuted_blocks=1 00:34:28.561 00:34:28.561 ' 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:34:28.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:28.561 03:15:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:35.137 03:15:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:35.137 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:35.137 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:35.137 Found net devices under 0000:af:00.0: cvl_0_0 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:35.137 Found net devices under 0000:af:00.1: cvl_0_1 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:35.137 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:35.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:35.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:34:35.138 00:34:35.138 --- 10.0.0.2 ping statistics --- 00:34:35.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.138 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:35.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:35.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:34:35.138 00:34:35.138 --- 10.0.0.1 ping statistics --- 00:34:35.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.138 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.138 03:15:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:35.138 03:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:37.045 Waiting for block devices as requested 00:34:37.304 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:37.304 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:37.304 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:37.563 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:37.563 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:37.563 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:37.823 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:37.823 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:37.823 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:37.823 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:38.082 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:38.082 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:38.082 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:38.082 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:38.342 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:38.342 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:38.342 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
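From here the script builds a kernel nvmet target for the identify test. Because set -x does not print redirections, the entries below only show the values being echoed (the model string, the allow-any-host flag, the backing device, the namespace enable, then the address, transport, service id and address family) and the final ln -s; the configfs files they land in are not visible. A consolidated sketch of the equivalent manual setup, mapping those values onto the standard kernel nvmet configfs attributes; the attribute paths are an assumption, while the NQN, backing device and address are taken from the log:

    # Assumed mapping of the echoed values onto standard nvmet configfs attributes.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"             # model string
    echo 1                                > "$subsys/attr_allow_any_host"    # no host allow-list
    echo /dev/nvme0n1                     > "$subsys/namespaces/1/device_path"
    echo 1                                > "$subsys/namespaces/1/enable"
    echo 10.0.0.1                         > "$nvmet/ports/1/addr_traddr"
    echo tcp                              > "$nvmet/ports/1/addr_trtype"
    echo 4420                             > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4                             > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

    # Initiator-side check, as the script then does with its generated host NQN and ID:
    nvme discover -t tcp -a 10.0.0.1 -s 4420

The Model Number reported later by the identify pass, SPDK-nqn.2016-06.io.spdk:testnqn, is consistent with the first echo landing in attr_model, but the mapping above remains a reconstruction rather than a literal copy of nvmf/common.sh.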
00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:38.602 No valid GPT data, bailing 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:38.602 00:34:38.602 Discovery Log Number of Records 2, Generation counter 2 00:34:38.602 =====Discovery Log Entry 0====== 00:34:38.602 trtype: tcp 00:34:38.602 adrfam: ipv4 00:34:38.602 subtype: current discovery subsystem 00:34:38.602 treq: not specified, sq flow control disable supported 00:34:38.602 portid: 1 00:34:38.602 trsvcid: 4420 00:34:38.602 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:38.602 traddr: 10.0.0.1 00:34:38.602 eflags: none 00:34:38.602 sectype: none 00:34:38.602 =====Discovery Log Entry 1====== 00:34:38.602 trtype: tcp 00:34:38.602 adrfam: ipv4 00:34:38.602 subtype: nvme subsystem 00:34:38.602 treq: not specified, sq flow control disable 
supported 00:34:38.602 portid: 1 00:34:38.602 trsvcid: 4420 00:34:38.602 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:38.602 traddr: 10.0.0.1 00:34:38.602 eflags: none 00:34:38.602 sectype: none 00:34:38.602 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:38.602 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:38.863 ===================================================== 00:34:38.863 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:38.863 ===================================================== 00:34:38.863 Controller Capabilities/Features 00:34:38.863 ================================ 00:34:38.863 Vendor ID: 0000 00:34:38.863 Subsystem Vendor ID: 0000 00:34:38.863 Serial Number: 3512f01fffd302df7a51 00:34:38.863 Model Number: Linux 00:34:38.863 Firmware Version: 6.8.9-20 00:34:38.863 Recommended Arb Burst: 0 00:34:38.863 IEEE OUI Identifier: 00 00 00 00:34:38.863 Multi-path I/O 00:34:38.863 May have multiple subsystem ports: No 00:34:38.863 May have multiple controllers: No 00:34:38.863 Associated with SR-IOV VF: No 00:34:38.863 Max Data Transfer Size: Unlimited 00:34:38.863 Max Number of Namespaces: 0 00:34:38.863 Max Number of I/O Queues: 1024 00:34:38.863 NVMe Specification Version (VS): 1.3 00:34:38.863 NVMe Specification Version (Identify): 1.3 00:34:38.863 Maximum Queue Entries: 1024 00:34:38.863 Contiguous Queues Required: No 00:34:38.863 Arbitration Mechanisms Supported 00:34:38.863 Weighted Round Robin: Not Supported 00:34:38.863 Vendor Specific: Not Supported 00:34:38.863 Reset Timeout: 7500 ms 00:34:38.863 Doorbell Stride: 4 bytes 00:34:38.863 NVM Subsystem Reset: Not Supported 00:34:38.863 Command Sets Supported 00:34:38.863 NVM Command Set: Supported 00:34:38.863 Boot Partition: Not Supported 00:34:38.863 Memory Page Size Minimum: 4096 bytes 00:34:38.863 Memory Page Size Maximum: 4096 bytes 00:34:38.863 Persistent Memory Region: Not Supported 00:34:38.863 Optional Asynchronous Events Supported 00:34:38.863 Namespace Attribute Notices: Not Supported 00:34:38.863 Firmware Activation Notices: Not Supported 00:34:38.863 ANA Change Notices: Not Supported 00:34:38.863 PLE Aggregate Log Change Notices: Not Supported 00:34:38.863 LBA Status Info Alert Notices: Not Supported 00:34:38.863 EGE Aggregate Log Change Notices: Not Supported 00:34:38.863 Normal NVM Subsystem Shutdown event: Not Supported 00:34:38.863 Zone Descriptor Change Notices: Not Supported 00:34:38.863 Discovery Log Change Notices: Supported 00:34:38.863 Controller Attributes 00:34:38.863 128-bit Host Identifier: Not Supported 00:34:38.863 Non-Operational Permissive Mode: Not Supported 00:34:38.863 NVM Sets: Not Supported 00:34:38.863 Read Recovery Levels: Not Supported 00:34:38.863 Endurance Groups: Not Supported 00:34:38.863 Predictable Latency Mode: Not Supported 00:34:38.863 Traffic Based Keep ALive: Not Supported 00:34:38.863 Namespace Granularity: Not Supported 00:34:38.863 SQ Associations: Not Supported 00:34:38.863 UUID List: Not Supported 00:34:38.863 Multi-Domain Subsystem: Not Supported 00:34:38.863 Fixed Capacity Management: Not Supported 00:34:38.863 Variable Capacity Management: Not Supported 00:34:38.863 Delete Endurance Group: Not Supported 00:34:38.863 Delete NVM Set: Not Supported 00:34:38.863 Extended LBA Formats Supported: Not Supported 00:34:38.863 Flexible Data Placement 
Supported: Not Supported 00:34:38.863 00:34:38.863 Controller Memory Buffer Support 00:34:38.863 ================================ 00:34:38.863 Supported: No 00:34:38.863 00:34:38.863 Persistent Memory Region Support 00:34:38.863 ================================ 00:34:38.863 Supported: No 00:34:38.863 00:34:38.863 Admin Command Set Attributes 00:34:38.863 ============================ 00:34:38.863 Security Send/Receive: Not Supported 00:34:38.863 Format NVM: Not Supported 00:34:38.863 Firmware Activate/Download: Not Supported 00:34:38.863 Namespace Management: Not Supported 00:34:38.863 Device Self-Test: Not Supported 00:34:38.863 Directives: Not Supported 00:34:38.863 NVMe-MI: Not Supported 00:34:38.863 Virtualization Management: Not Supported 00:34:38.863 Doorbell Buffer Config: Not Supported 00:34:38.863 Get LBA Status Capability: Not Supported 00:34:38.863 Command & Feature Lockdown Capability: Not Supported 00:34:38.863 Abort Command Limit: 1 00:34:38.863 Async Event Request Limit: 1 00:34:38.863 Number of Firmware Slots: N/A 00:34:38.863 Firmware Slot 1 Read-Only: N/A 00:34:38.863 Firmware Activation Without Reset: N/A 00:34:38.863 Multiple Update Detection Support: N/A 00:34:38.863 Firmware Update Granularity: No Information Provided 00:34:38.863 Per-Namespace SMART Log: No 00:34:38.863 Asymmetric Namespace Access Log Page: Not Supported 00:34:38.863 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:38.863 Command Effects Log Page: Not Supported 00:34:38.863 Get Log Page Extended Data: Supported 00:34:38.863 Telemetry Log Pages: Not Supported 00:34:38.863 Persistent Event Log Pages: Not Supported 00:34:38.863 Supported Log Pages Log Page: May Support 00:34:38.863 Commands Supported & Effects Log Page: Not Supported 00:34:38.863 Feature Identifiers & Effects Log Page:May Support 00:34:38.863 NVMe-MI Commands & Effects Log Page: May Support 00:34:38.863 Data Area 4 for Telemetry Log: Not Supported 00:34:38.863 Error Log Page Entries Supported: 1 00:34:38.863 Keep Alive: Not Supported 00:34:38.863 00:34:38.863 NVM Command Set Attributes 00:34:38.863 ========================== 00:34:38.863 Submission Queue Entry Size 00:34:38.863 Max: 1 00:34:38.863 Min: 1 00:34:38.863 Completion Queue Entry Size 00:34:38.863 Max: 1 00:34:38.863 Min: 1 00:34:38.863 Number of Namespaces: 0 00:34:38.863 Compare Command: Not Supported 00:34:38.863 Write Uncorrectable Command: Not Supported 00:34:38.864 Dataset Management Command: Not Supported 00:34:38.864 Write Zeroes Command: Not Supported 00:34:38.864 Set Features Save Field: Not Supported 00:34:38.864 Reservations: Not Supported 00:34:38.864 Timestamp: Not Supported 00:34:38.864 Copy: Not Supported 00:34:38.864 Volatile Write Cache: Not Present 00:34:38.864 Atomic Write Unit (Normal): 1 00:34:38.864 Atomic Write Unit (PFail): 1 00:34:38.864 Atomic Compare & Write Unit: 1 00:34:38.864 Fused Compare & Write: Not Supported 00:34:38.864 Scatter-Gather List 00:34:38.864 SGL Command Set: Supported 00:34:38.864 SGL Keyed: Not Supported 00:34:38.864 SGL Bit Bucket Descriptor: Not Supported 00:34:38.864 SGL Metadata Pointer: Not Supported 00:34:38.864 Oversized SGL: Not Supported 00:34:38.864 SGL Metadata Address: Not Supported 00:34:38.864 SGL Offset: Supported 00:34:38.864 Transport SGL Data Block: Not Supported 00:34:38.864 Replay Protected Memory Block: Not Supported 00:34:38.864 00:34:38.864 Firmware Slot Information 00:34:38.864 ========================= 00:34:38.864 Active slot: 0 00:34:38.864 00:34:38.864 00:34:38.864 Error Log 00:34:38.864 
========= 00:34:38.864 00:34:38.864 Active Namespaces 00:34:38.864 ================= 00:34:38.864 Discovery Log Page 00:34:38.864 ================== 00:34:38.864 Generation Counter: 2 00:34:38.864 Number of Records: 2 00:34:38.864 Record Format: 0 00:34:38.864 00:34:38.864 Discovery Log Entry 0 00:34:38.864 ---------------------- 00:34:38.864 Transport Type: 3 (TCP) 00:34:38.864 Address Family: 1 (IPv4) 00:34:38.864 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:38.864 Entry Flags: 00:34:38.864 Duplicate Returned Information: 0 00:34:38.864 Explicit Persistent Connection Support for Discovery: 0 00:34:38.864 Transport Requirements: 00:34:38.864 Secure Channel: Not Specified 00:34:38.864 Port ID: 1 (0x0001) 00:34:38.864 Controller ID: 65535 (0xffff) 00:34:38.864 Admin Max SQ Size: 32 00:34:38.864 Transport Service Identifier: 4420 00:34:38.864 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:38.864 Transport Address: 10.0.0.1 00:34:38.864 Discovery Log Entry 1 00:34:38.864 ---------------------- 00:34:38.864 Transport Type: 3 (TCP) 00:34:38.864 Address Family: 1 (IPv4) 00:34:38.864 Subsystem Type: 2 (NVM Subsystem) 00:34:38.864 Entry Flags: 00:34:38.864 Duplicate Returned Information: 0 00:34:38.864 Explicit Persistent Connection Support for Discovery: 0 00:34:38.864 Transport Requirements: 00:34:38.864 Secure Channel: Not Specified 00:34:38.864 Port ID: 1 (0x0001) 00:34:38.864 Controller ID: 65535 (0xffff) 00:34:38.864 Admin Max SQ Size: 32 00:34:38.864 Transport Service Identifier: 4420 00:34:38.864 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:38.864 Transport Address: 10.0.0.1 00:34:38.864 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:38.864 get_feature(0x01) failed 00:34:38.864 get_feature(0x02) failed 00:34:38.864 get_feature(0x04) failed 00:34:38.864 ===================================================== 00:34:38.864 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:38.864 ===================================================== 00:34:38.864 Controller Capabilities/Features 00:34:38.864 ================================ 00:34:38.864 Vendor ID: 0000 00:34:38.864 Subsystem Vendor ID: 0000 00:34:38.864 Serial Number: 52e7480c1042b21a6975 00:34:38.864 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:38.864 Firmware Version: 6.8.9-20 00:34:38.864 Recommended Arb Burst: 6 00:34:38.864 IEEE OUI Identifier: 00 00 00 00:34:38.864 Multi-path I/O 00:34:38.864 May have multiple subsystem ports: Yes 00:34:38.864 May have multiple controllers: Yes 00:34:38.864 Associated with SR-IOV VF: No 00:34:38.864 Max Data Transfer Size: Unlimited 00:34:38.864 Max Number of Namespaces: 1024 00:34:38.864 Max Number of I/O Queues: 128 00:34:38.864 NVMe Specification Version (VS): 1.3 00:34:38.864 NVMe Specification Version (Identify): 1.3 00:34:38.864 Maximum Queue Entries: 1024 00:34:38.864 Contiguous Queues Required: No 00:34:38.864 Arbitration Mechanisms Supported 00:34:38.864 Weighted Round Robin: Not Supported 00:34:38.864 Vendor Specific: Not Supported 00:34:38.864 Reset Timeout: 7500 ms 00:34:38.864 Doorbell Stride: 4 bytes 00:34:38.864 NVM Subsystem Reset: Not Supported 00:34:38.864 Command Sets Supported 00:34:38.864 NVM Command Set: Supported 00:34:38.864 Boot Partition: Not Supported 00:34:38.864 
Memory Page Size Minimum: 4096 bytes 00:34:38.864 Memory Page Size Maximum: 4096 bytes 00:34:38.864 Persistent Memory Region: Not Supported 00:34:38.864 Optional Asynchronous Events Supported 00:34:38.864 Namespace Attribute Notices: Supported 00:34:38.864 Firmware Activation Notices: Not Supported 00:34:38.864 ANA Change Notices: Supported 00:34:38.864 PLE Aggregate Log Change Notices: Not Supported 00:34:38.864 LBA Status Info Alert Notices: Not Supported 00:34:38.864 EGE Aggregate Log Change Notices: Not Supported 00:34:38.864 Normal NVM Subsystem Shutdown event: Not Supported 00:34:38.864 Zone Descriptor Change Notices: Not Supported 00:34:38.864 Discovery Log Change Notices: Not Supported 00:34:38.864 Controller Attributes 00:34:38.864 128-bit Host Identifier: Supported 00:34:38.864 Non-Operational Permissive Mode: Not Supported 00:34:38.864 NVM Sets: Not Supported 00:34:38.864 Read Recovery Levels: Not Supported 00:34:38.864 Endurance Groups: Not Supported 00:34:38.864 Predictable Latency Mode: Not Supported 00:34:38.864 Traffic Based Keep ALive: Supported 00:34:38.864 Namespace Granularity: Not Supported 00:34:38.864 SQ Associations: Not Supported 00:34:38.864 UUID List: Not Supported 00:34:38.864 Multi-Domain Subsystem: Not Supported 00:34:38.864 Fixed Capacity Management: Not Supported 00:34:38.864 Variable Capacity Management: Not Supported 00:34:38.864 Delete Endurance Group: Not Supported 00:34:38.864 Delete NVM Set: Not Supported 00:34:38.864 Extended LBA Formats Supported: Not Supported 00:34:38.864 Flexible Data Placement Supported: Not Supported 00:34:38.864 00:34:38.864 Controller Memory Buffer Support 00:34:38.864 ================================ 00:34:38.864 Supported: No 00:34:38.864 00:34:38.864 Persistent Memory Region Support 00:34:38.864 ================================ 00:34:38.864 Supported: No 00:34:38.864 00:34:38.864 Admin Command Set Attributes 00:34:38.864 ============================ 00:34:38.864 Security Send/Receive: Not Supported 00:34:38.864 Format NVM: Not Supported 00:34:38.864 Firmware Activate/Download: Not Supported 00:34:38.864 Namespace Management: Not Supported 00:34:38.864 Device Self-Test: Not Supported 00:34:38.864 Directives: Not Supported 00:34:38.864 NVMe-MI: Not Supported 00:34:38.864 Virtualization Management: Not Supported 00:34:38.864 Doorbell Buffer Config: Not Supported 00:34:38.864 Get LBA Status Capability: Not Supported 00:34:38.864 Command & Feature Lockdown Capability: Not Supported 00:34:38.864 Abort Command Limit: 4 00:34:38.864 Async Event Request Limit: 4 00:34:38.864 Number of Firmware Slots: N/A 00:34:38.864 Firmware Slot 1 Read-Only: N/A 00:34:38.864 Firmware Activation Without Reset: N/A 00:34:38.864 Multiple Update Detection Support: N/A 00:34:38.864 Firmware Update Granularity: No Information Provided 00:34:38.864 Per-Namespace SMART Log: Yes 00:34:38.864 Asymmetric Namespace Access Log Page: Supported 00:34:38.865 ANA Transition Time : 10 sec 00:34:38.865 00:34:38.865 Asymmetric Namespace Access Capabilities 00:34:38.865 ANA Optimized State : Supported 00:34:38.865 ANA Non-Optimized State : Supported 00:34:38.865 ANA Inaccessible State : Supported 00:34:38.865 ANA Persistent Loss State : Supported 00:34:38.865 ANA Change State : Supported 00:34:38.865 ANAGRPID is not changed : No 00:34:38.865 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:38.865 00:34:38.865 ANA Group Identifier Maximum : 128 00:34:38.865 Number of ANA Group Identifiers : 128 00:34:38.865 Max Number of Allowed Namespaces : 1024 00:34:38.865 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:38.865 Command Effects Log Page: Supported 00:34:38.865 Get Log Page Extended Data: Supported 00:34:38.865 Telemetry Log Pages: Not Supported 00:34:38.865 Persistent Event Log Pages: Not Supported 00:34:38.865 Supported Log Pages Log Page: May Support 00:34:38.865 Commands Supported & Effects Log Page: Not Supported 00:34:38.865 Feature Identifiers & Effects Log Page:May Support 00:34:38.865 NVMe-MI Commands & Effects Log Page: May Support 00:34:38.865 Data Area 4 for Telemetry Log: Not Supported 00:34:38.865 Error Log Page Entries Supported: 128 00:34:38.865 Keep Alive: Supported 00:34:38.865 Keep Alive Granularity: 1000 ms 00:34:38.865 00:34:38.865 NVM Command Set Attributes 00:34:38.865 ========================== 00:34:38.865 Submission Queue Entry Size 00:34:38.865 Max: 64 00:34:38.865 Min: 64 00:34:38.865 Completion Queue Entry Size 00:34:38.865 Max: 16 00:34:38.865 Min: 16 00:34:38.865 Number of Namespaces: 1024 00:34:38.865 Compare Command: Not Supported 00:34:38.865 Write Uncorrectable Command: Not Supported 00:34:38.865 Dataset Management Command: Supported 00:34:38.865 Write Zeroes Command: Supported 00:34:38.865 Set Features Save Field: Not Supported 00:34:38.865 Reservations: Not Supported 00:34:38.865 Timestamp: Not Supported 00:34:38.865 Copy: Not Supported 00:34:38.865 Volatile Write Cache: Present 00:34:38.865 Atomic Write Unit (Normal): 1 00:34:38.865 Atomic Write Unit (PFail): 1 00:34:38.865 Atomic Compare & Write Unit: 1 00:34:38.865 Fused Compare & Write: Not Supported 00:34:38.865 Scatter-Gather List 00:34:38.865 SGL Command Set: Supported 00:34:38.865 SGL Keyed: Not Supported 00:34:38.865 SGL Bit Bucket Descriptor: Not Supported 00:34:38.865 SGL Metadata Pointer: Not Supported 00:34:38.865 Oversized SGL: Not Supported 00:34:38.865 SGL Metadata Address: Not Supported 00:34:38.865 SGL Offset: Supported 00:34:38.865 Transport SGL Data Block: Not Supported 00:34:38.865 Replay Protected Memory Block: Not Supported 00:34:38.865 00:34:38.865 Firmware Slot Information 00:34:38.865 ========================= 00:34:38.865 Active slot: 0 00:34:38.865 00:34:38.865 Asymmetric Namespace Access 00:34:38.865 =========================== 00:34:38.865 Change Count : 0 00:34:38.865 Number of ANA Group Descriptors : 1 00:34:38.865 ANA Group Descriptor : 0 00:34:38.865 ANA Group ID : 1 00:34:38.865 Number of NSID Values : 1 00:34:38.865 Change Count : 0 00:34:38.865 ANA State : 1 00:34:38.865 Namespace Identifier : 1 00:34:38.865 00:34:38.865 Commands Supported and Effects 00:34:38.865 ============================== 00:34:38.865 Admin Commands 00:34:38.865 -------------- 00:34:38.865 Get Log Page (02h): Supported 00:34:38.865 Identify (06h): Supported 00:34:38.865 Abort (08h): Supported 00:34:38.865 Set Features (09h): Supported 00:34:38.865 Get Features (0Ah): Supported 00:34:38.865 Asynchronous Event Request (0Ch): Supported 00:34:38.865 Keep Alive (18h): Supported 00:34:38.865 I/O Commands 00:34:38.865 ------------ 00:34:38.865 Flush (00h): Supported 00:34:38.865 Write (01h): Supported LBA-Change 00:34:38.865 Read (02h): Supported 00:34:38.865 Write Zeroes (08h): Supported LBA-Change 00:34:38.865 Dataset Management (09h): Supported 00:34:38.865 00:34:38.865 Error Log 00:34:38.865 ========= 00:34:38.865 Entry: 0 00:34:38.865 Error Count: 0x3 00:34:38.865 Submission Queue Id: 0x0 00:34:38.865 Command Id: 0x5 00:34:38.865 Phase Bit: 0 00:34:38.865 Status Code: 0x2 00:34:38.865 Status Code Type: 0x0 00:34:38.865 Do Not Retry: 1 00:34:38.865 
Error Location: 0x28 00:34:38.865 LBA: 0x0 00:34:38.865 Namespace: 0x0 00:34:38.865 Vendor Log Page: 0x0 00:34:38.865 ----------- 00:34:38.865 Entry: 1 00:34:38.865 Error Count: 0x2 00:34:38.865 Submission Queue Id: 0x0 00:34:38.865 Command Id: 0x5 00:34:38.865 Phase Bit: 0 00:34:38.865 Status Code: 0x2 00:34:38.865 Status Code Type: 0x0 00:34:38.865 Do Not Retry: 1 00:34:38.865 Error Location: 0x28 00:34:38.865 LBA: 0x0 00:34:38.865 Namespace: 0x0 00:34:38.865 Vendor Log Page: 0x0 00:34:38.865 ----------- 00:34:38.865 Entry: 2 00:34:38.865 Error Count: 0x1 00:34:38.865 Submission Queue Id: 0x0 00:34:38.865 Command Id: 0x4 00:34:38.865 Phase Bit: 0 00:34:38.865 Status Code: 0x2 00:34:38.865 Status Code Type: 0x0 00:34:38.865 Do Not Retry: 1 00:34:38.865 Error Location: 0x28 00:34:38.865 LBA: 0x0 00:34:38.865 Namespace: 0x0 00:34:38.865 Vendor Log Page: 0x0 00:34:38.865 00:34:38.865 Number of Queues 00:34:38.865 ================ 00:34:38.865 Number of I/O Submission Queues: 128 00:34:38.865 Number of I/O Completion Queues: 128 00:34:38.865 00:34:38.865 ZNS Specific Controller Data 00:34:38.865 ============================ 00:34:38.865 Zone Append Size Limit: 0 00:34:38.865 00:34:38.865 00:34:38.865 Active Namespaces 00:34:38.865 ================= 00:34:38.865 get_feature(0x05) failed 00:34:38.865 Namespace ID:1 00:34:38.865 Command Set Identifier: NVM (00h) 00:34:38.865 Deallocate: Supported 00:34:38.865 Deallocated/Unwritten Error: Not Supported 00:34:38.865 Deallocated Read Value: Unknown 00:34:38.865 Deallocate in Write Zeroes: Not Supported 00:34:38.865 Deallocated Guard Field: 0xFFFF 00:34:38.865 Flush: Supported 00:34:38.865 Reservation: Not Supported 00:34:38.865 Namespace Sharing Capabilities: Multiple Controllers 00:34:38.865 Size (in LBAs): 1953525168 (931GiB) 00:34:38.865 Capacity (in LBAs): 1953525168 (931GiB) 00:34:38.865 Utilization (in LBAs): 1953525168 (931GiB) 00:34:38.865 UUID: ba44bc86-7b5a-4a3a-bcb3-9ef2ec416139 00:34:38.865 Thin Provisioning: Not Supported 00:34:38.865 Per-NS Atomic Units: Yes 00:34:38.865 Atomic Boundary Size (Normal): 0 00:34:38.865 Atomic Boundary Size (PFail): 0 00:34:38.865 Atomic Boundary Offset: 0 00:34:38.865 NGUID/EUI64 Never Reused: No 00:34:38.865 ANA group ID: 1 00:34:38.865 Namespace Write Protected: No 00:34:38.865 Number of LBA Formats: 1 00:34:38.865 Current LBA Format: LBA Format #00 00:34:38.865 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:38.865 00:34:38.865 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:38.865 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:38.865 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:38.865 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:38.865 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:38.865 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:38.866 rmmod nvme_tcp 00:34:38.866 rmmod nvme_fabrics 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:38.866 03:15:53 
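Editor's note: the dump above is the output of two spdk_nvme_identify runs against the kernel nvmet target at 10.0.0.1:4420. The first hits the discovery subsystem (nqn.2014-08.org.nvmexpress.discovery) and lists the two discovery log entries; the second, whose command line is visible in the trace, passes subnqn:nqn.2016-06.io.spdk:testnqn and reports the full controller, ANA, error-log and namespace data for the 931 GiB namespace. A minimal sketch of reproducing both queries by hand; the spdk_nvme_identify path is shortened from the workspace path in the trace, and the nvme discover call is an assumed nvme-cli equivalent of the discovery query, not the exact command the test ran:

    # Sketch: re-run the two identify queries shown above against the kernel target.
    TRADDR=10.0.0.1 TRSVCID=4420 SUBNQN=nqn.2016-06.io.spdk:testnqn

    # Discovery log page (same information as "Discovery Log Entry 0/1" above);
    # assumed nvme-cli equivalent, not the command used by identify_kernel_nvmf.sh.
    nvme discover -t tcp -a "$TRADDR" -s "$TRSVCID"

    # Full identify of the NVM subsystem, as identify_kernel_nvmf.sh@24 does above.
    ./build/bin/spdk_nvme_identify \
        -r "trtype:tcp adrfam:IPv4 traddr:$TRADDR trsvcid:$TRSVCID subnqn:$SUBNQN"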
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:38.866 03:15:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.402 03:15:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:41.402 03:15:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:41.402 03:15:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:41.402 03:15:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:41.402 03:15:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:41.402 03:15:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:41.402 03:15:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:41.402 03:15:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:41.402 03:15:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:41.402 03:15:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:41.402 03:15:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:43.940 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:43.940 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:43.940 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:43.940 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:43.941 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:43.941 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:34:43.941 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:43.941 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:43.941 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:43.941 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:43.941 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:43.941 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:43.941 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:43.941 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:43.941 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:43.941 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:44.879 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:44.879 00:34:44.879 real 0m16.420s 00:34:44.879 user 0m4.441s 00:34:44.879 sys 0m8.461s 00:34:44.879 03:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:44.879 03:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:44.879 ************************************ 00:34:44.879 END TEST nvmf_identify_kernel_target 00:34:44.879 ************************************ 00:34:44.879 03:15:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:44.879 03:15:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:44.879 03:15:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:44.879 03:15:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.879 ************************************ 00:34:44.879 START TEST nvmf_auth_host 00:34:44.879 ************************************ 00:34:44.879 03:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:44.879 * Looking for test storage... 
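Editor's note: before the next test starts, the identify test's cleanup runs clean_kernel_target (traced above): the subsystem is unlinked from port 1, the namespace, port and subsystem directories are removed from nvmet configfs, the nvmet_tcp/nvmet modules are unloaded, and setup.sh then rebinds the ioatdma and NVMe devices to vfio-pci for the SPDK tests that follow. A minimal sketch of that teardown order, using the NQN and port number from the trace; the target of the initial 'echo 0' is not shown in the log, so the namespace enable attribute is an assumption:

    # Sketch of the configfs teardown performed by clean_kernel_target above (run as root).
    NQN=nqn.2016-06.io.spdk:testnqn
    CFG=/sys/kernel/config/nvmet

    echo 0 > "$CFG/subsystems/$NQN/namespaces/1/enable"   # assumed target of the 'echo 0' in the trace
    rm -f "$CFG/ports/1/subsystems/$NQN"                  # detach the subsystem from TCP port 1
    rmdir "$CFG/subsystems/$NQN/namespaces/1"             # remove namespace 1
    rmdir "$CFG/ports/1"                                  # remove the port
    rmdir "$CFG/subsystems/$NQN"                          # remove the subsystem itself
    modprobe -r nvmet_tcp nvmet                           # unload the kernel target modules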
00:34:45.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:45.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.139 --rc genhtml_branch_coverage=1 00:34:45.139 --rc genhtml_function_coverage=1 00:34:45.139 --rc genhtml_legend=1 00:34:45.139 --rc geninfo_all_blocks=1 00:34:45.139 --rc geninfo_unexecuted_blocks=1 00:34:45.139 00:34:45.139 ' 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:45.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.139 --rc genhtml_branch_coverage=1 00:34:45.139 --rc genhtml_function_coverage=1 00:34:45.139 --rc genhtml_legend=1 00:34:45.139 --rc geninfo_all_blocks=1 00:34:45.139 --rc geninfo_unexecuted_blocks=1 00:34:45.139 00:34:45.139 ' 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:45.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.139 --rc genhtml_branch_coverage=1 00:34:45.139 --rc genhtml_function_coverage=1 00:34:45.139 --rc genhtml_legend=1 00:34:45.139 --rc geninfo_all_blocks=1 00:34:45.139 --rc geninfo_unexecuted_blocks=1 00:34:45.139 00:34:45.139 ' 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:45.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.139 --rc genhtml_branch_coverage=1 00:34:45.139 --rc genhtml_function_coverage=1 00:34:45.139 --rc genhtml_legend=1 00:34:45.139 --rc geninfo_all_blocks=1 00:34:45.139 --rc geninfo_unexecuted_blocks=1 00:34:45.139 00:34:45.139 ' 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.139 03:16:00 
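Editor's note: the lcov probe above walks through scripts/common.sh: the installed lcov version is extracted with awk, split on '.' and '-' into an array, and compared element by element against 2, which is how 'lt 1.15 2' decides that the older option set is needed. A small stand-alone sketch of that comparison logic (a simplified reimplementation, not the script itself):

    # Sketch of the element-wise version compare traced above.
    version_lt() {                 # returns 0 (true) if $1 < $2
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1                   # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"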
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.139 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:45.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:45.140 03:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:51.713 03:16:05 
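Editor's note: nvmftestinit above starts from the identity and parameters set up while sourcing nvmf/common.sh and auth.sh: nvme gen-hostnqn produced the nqn.2014-08.org.nvmexpress:uuid:80b56b8f-... host NQN (with the bare UUID reused as the host ID), digests fixes sha256/sha384/sha512, dhgroups fixes ffdhe2048 through ffdhe8192, and the test subsystem/host pair is nqn.2024-02.io.spdk:cnode0 / nqn.2024-02.io.spdk:host0. A minimal sketch of deriving the same host identity, assuming nvme-cli is installed; the suffix-stripping is an illustrative equivalent of what common.sh does, not its exact code:

    # Sketch: build the host NQN/ID pair used by the trace above.
    NVME_HOSTNQN=$(nvme gen-hostnqn)              # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}          # keep just the UUID for --hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    printf 'host NQN: %s\nhost ID : %s\n' "$NVME_HOSTNQN" "$NVME_HOSTID"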
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:51.713 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:51.713 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.713 
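Editor's note: gather_supported_nvmf_pci_devs, traced above, whitelists NICs by PCI device ID, Intel E810 (0x1592/0x159b) and X722 (0x37d2) plus a set of Mellanox ConnectX/BlueField IDs, and reports each match; on this node the two E810 ports at 0000:af:00.0 and 0000:af:00.1 are found. A minimal sketch of the same scan using lspci, which is an assumed substitute for the script's internal pci_bus_cache lookup:

    # Sketch: list the NVMe-oF-capable NICs the trace above matches by device ID.
    lspci -Dnn | grep -E '8086:(1592|159b|37d2)'                 # Intel E810 / X722
    lspci -Dnn | grep -E '15b3:(101[3579bd]|1021|a2d6|a2dc)'     # Mellanox IDs from the whitelist above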
03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:51.713 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:51.714 Found net devices under 0000:af:00.0: cvl_0_0 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:51.714 Found net devices under 0000:af:00.1: cvl_0_1 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:51.714 03:16:05 
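Editor's note: each matching PCI function is then mapped to its Linux interface by globbing /sys/bus/pci/devices/<bdf>/net/*, which is how the trace above arrives at cvl_0_0 and cvl_0_1 for the two E810 ports. The same lookup by hand:

    # Sketch: map a PCI function to its net device, as pci_net_devs above does.
    ls /sys/bus/pci/devices/0000:af:00.0/net/     # -> cvl_0_0 on this node
    ls /sys/bus/pci/devices/0000:af:00.1/net/     # -> cvl_0_1 on this node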
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:51.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:51.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:34:51.714 00:34:51.714 --- 10.0.0.2 ping statistics --- 00:34:51.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:51.714 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:51.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:51.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:34:51.714 00:34:51.714 --- 10.0.0.1 ping statistics --- 00:34:51.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:51.714 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=380079 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 380079 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 380079 ']' 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
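Editor's note: nvmf_tcp_init and nvmfappstart, traced above, build the TCP fixture for the auth tests: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk network namespace as the target side with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, an iptables rule admits TCP port 4420, connectivity is checked with ping in both directions, and nvmf_tgt is started inside the namespace with -i 0 -e 0xFFFF -L nvme_auth. A minimal sketch of the same fixture; interface and namespace names are taken from the trace, the nvmf_tgt path is shortened, and everything must run as root:

    # Sketch of the netns-based TCP fixture built by nvmf_tcp_init/nvmfappstart above.
    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                      # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                  # initiator side (root namespace)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                     # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                 # target namespace -> initiator

    # Start the SPDK target inside the namespace with nvme_auth tracing, as the test does;
    # the trace then waits for the RPC socket before continuing.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &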
00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:51.714 03:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1e8de3a28de2e22aa8cb1a5e55a0d8f6 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.bsm 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1e8de3a28de2e22aa8cb1a5e55a0d8f6 0 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1e8de3a28de2e22aa8cb1a5e55a0d8f6 0 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1e8de3a28de2e22aa8cb1a5e55a0d8f6 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:51.714 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.bsm 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.bsm 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.bsm 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:51.715 03:16:06 
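Editor's note: gen_dhchap_key, traced above, creates the secrets the auth test will use. For keys[0] ('null', 32 hex characters) it reads 16 random bytes as hex from /dev/urandom, wraps them into a DHHC-1 key string with a small python helper from nvmf/common.sh (the helper body is not shown in the log), writes the result to a mktemp'd /tmp/spdk.key-null.* file and chmod 0600's it; the sha256/sha384/sha512 variants that follow differ only in the byte count passed to xxd and the digest tag passed to the formatter. A minimal sketch of that flow; because the python formatting step is not visible here, the sketch stores the raw hex in its place:

    # Sketch: the secret-material step gen_dhchap_key performs above for keys[0].
    key_hex=$(xxd -p -c0 -l 16 /dev/urandom)   # 16 random bytes -> 32 hex characters
    keyfile=$(mktemp -t spdk.key-null.XXX)     # e.g. /tmp/spdk.key-null.bsm in the trace
    # The trace feeds $key_hex through a python helper that emits the DHHC-1-prefixed
    # key string; that helper is not shown in the log, so store the raw hex instead.
    printf '%s\n' "$key_hex" > "$keyfile"
    chmod 0600 "$keyfile"                      # the trace locks each key file down the same way
    echo "$keyfile"                            # gen_dhchap_key returns this path; auth.sh saves it in keys[0]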
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4db1033dbb0fb73696dd0ae9d97e143c0e994badb956e2793d6ca1e1d1ea4bcc 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Q1V 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4db1033dbb0fb73696dd0ae9d97e143c0e994badb956e2793d6ca1e1d1ea4bcc 3 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4db1033dbb0fb73696dd0ae9d97e143c0e994badb956e2793d6ca1e1d1ea4bcc 3 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4db1033dbb0fb73696dd0ae9d97e143c0e994badb956e2793d6ca1e1d1ea4bcc 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Q1V 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Q1V 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Q1V 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=da6da1a7d42c93399fc3aa3e2cc60547b3e741e2821f7ec0 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nVH 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key da6da1a7d42c93399fc3aa3e2cc60547b3e741e2821f7ec0 0 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 da6da1a7d42c93399fc3aa3e2cc60547b3e741e2821f7ec0 0 
00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=da6da1a7d42c93399fc3aa3e2cc60547b3e741e2821f7ec0 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nVH 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nVH 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.nVH 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1c2d6ee8ac6f7e091366c0a956c2d60405da4025981975ea 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.C30 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1c2d6ee8ac6f7e091366c0a956c2d60405da4025981975ea 2 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1c2d6ee8ac6f7e091366c0a956c2d60405da4025981975ea 2 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1c2d6ee8ac6f7e091366c0a956c2d60405da4025981975ea 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.C30 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.C30 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.C30 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.715 03:16:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e66c4232fde7aaabe24c17dd53c11725 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jHr 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e66c4232fde7aaabe24c17dd53c11725 1 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e66c4232fde7aaabe24c17dd53c11725 1 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e66c4232fde7aaabe24c17dd53c11725 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jHr 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jHr 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jHr 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=817e63f2e8dd4e7c0ab4875690889c08 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Yb5 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 817e63f2e8dd4e7c0ab4875690889c08 1 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 817e63f2e8dd4e7c0ab4875690889c08 1 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=817e63f2e8dd4e7c0ab4875690889c08 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Yb5 00:34:51.715 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Yb5 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Yb5 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e74ee721a40d811f7b1c05cbb804619ec396b73a605cc7a5 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.PNB 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e74ee721a40d811f7b1c05cbb804619ec396b73a605cc7a5 2 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e74ee721a40d811f7b1c05cbb804619ec396b73a605cc7a5 2 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e74ee721a40d811f7b1c05cbb804619ec396b73a605cc7a5 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.PNB 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.PNB 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.PNB 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:51.716 03:16:06 
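The gen_dhchap_key/format_dhchap_key trace above (and continuing below) turns raw bytes read from /dev/urandom via xxd into DH-HMAC-CHAP secrets in the DHHC-1 representation. A minimal standalone sketch of that formatting step, assuming (from the python heredoc invoked in nvmf/common.sh) that the secret string is base64-encoded together with its little-endian CRC-32; the helper name is illustrative, and the sample secret is the sha256-flavoured one generated just above.

format_dhchap_key_sketch() {
  # prefix (DHHC-1), the hex secret produced by xxd, and the digest id (0=null, 1=sha256, 2=sha384, 3=sha512)
  local prefix=$1 key=$2 digest=$3
  python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")          # secret is suffixed with its CRC-32 (assumed layout)
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
PYEOF
}

# e.g. the sha256 key generated above:
format_dhchap_key_sketch DHHC-1 817e63f2e8dd4e7c0ab4875690889c08 1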
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=04ff1fb9d86962fb4b32833c617b4ac2 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.oxY 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 04ff1fb9d86962fb4b32833c617b4ac2 0 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 04ff1fb9d86962fb4b32833c617b4ac2 0 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=04ff1fb9d86962fb4b32833c617b4ac2 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.oxY 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.oxY 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.oxY 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1ea12498c295b3b8af04d1dc4cf13a82941e46760479dbbdd62fe8b9555af524 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mtG 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1ea12498c295b3b8af04d1dc4cf13a82941e46760479dbbdd62fe8b9555af524 3 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1ea12498c295b3b8af04d1dc4cf13a82941e46760479dbbdd62fe8b9555af524 3 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1ea12498c295b3b8af04d1dc4cf13a82941e46760479dbbdd62fe8b9555af524 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mtG 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mtG 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.mtG 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 380079 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 380079 ']' 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:51.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:51.716 03:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bsm 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Q1V ]] 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Q1V 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.nVH 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.C30 ]] 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.C30 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jHr 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Yb5 ]] 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Yb5 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.PNB 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.oxY ]] 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.oxY 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.mtG 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.976 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.235 03:16:07 
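The keyring_file_add_key RPCs traced above hand each generated secret file to the running SPDK target under a stable name (key0/ckey0 through key4), so later attach calls can reference the secrets by keyring name instead of by path. A sketch of the same registration done by hand through scripts/rpc.py (default /var/tmp/spdk.sock socket); the keyring_get_keys inspection call is an assumption, it is not part of this trace.

# Register the keyid-1 pair generated earlier under their keyring names.
scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.nVH
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.C30
# Inspect what the keyring now holds (RPC name assumed, not shown in this run).
scripts/rpc.py keyring_get_keys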
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:52.235 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:52.236 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:52.236 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:52.236 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:52.236 03:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:54.771 Waiting for block devices as requested 00:34:54.771 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:55.030 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:55.030 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:55.030 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:55.030 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:55.289 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:55.289 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:55.289 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:55.548 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:55.548 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:55.548 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:55.548 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:55.807 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:55.807 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:55.807 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:56.066 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:56.066 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:56.634 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:56.634 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:56.635 No valid GPT data, bailing 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:56.635 03:16:11 
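configure_kernel_target builds the authentication peer entirely in the Linux kernel: an nvmet subsystem named nqn.2024-02.io.spdk:cnode0, namespace 1 backed by the local /dev/nvme0n1 that just passed the GPT/zoned checks, and a TCP port at 10.0.0.1:4420. The trace records only the echoed values, not the configfs files they are redirected into, so the attribute paths in this sketch follow the standard nvmet configfs layout and should be read as assumptions.

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"            # assumed destination of the first echo
echo 1                               > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1                    > "$subsys/namespaces/1/device_path"
echo 1                               > "$subsys/namespaces/1/enable"
echo 10.0.0.1                        > "$nvmet/ports/1/addr_traddr"
echo tcp                             > "$nvmet/ports/1/addr_trtype"
echo 4420                            > "$nvmet/ports/1/addr_trsvcid"
echo ipv4                            > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                           # expose the subsystem on the port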
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:56.635 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:56.894 00:34:56.894 Discovery Log Number of Records 2, Generation counter 2 00:34:56.894 =====Discovery Log Entry 0====== 00:34:56.894 trtype: tcp 00:34:56.894 adrfam: ipv4 00:34:56.894 subtype: current discovery subsystem 00:34:56.894 treq: not specified, sq flow control disable supported 00:34:56.894 portid: 1 00:34:56.894 trsvcid: 4420 00:34:56.894 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:56.894 traddr: 10.0.0.1 00:34:56.894 eflags: none 00:34:56.894 sectype: none 00:34:56.894 =====Discovery Log Entry 1====== 00:34:56.894 trtype: tcp 00:34:56.894 adrfam: ipv4 00:34:56.894 subtype: nvme subsystem 00:34:56.894 treq: not specified, sq flow control disable supported 00:34:56.894 portid: 1 00:34:56.894 trsvcid: 4420 00:34:56.894 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:56.894 traddr: 10.0.0.1 00:34:56.894 eflags: none 00:34:56.894 sectype: none 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
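On the target side, nvmet_auth_set_key installs the host's DH-HMAC-CHAP credentials in the kernel's per-host entry: the hmac(sha256) digest echoed just above, plus the ffdhe2048 group and the two DHHC-1 secrets whose echoes continue immediately below. The redirect targets are again not visible in the trace; the dhchap_* attribute names in this sketch follow the usual nvmet host-directory layout and are therefore an assumption.

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha256)'                    > "$host/dhchap_hash"      # negotiated HMAC
echo ffdhe2048                         > "$host/dhchap_dhgroup"   # DH group for the exchange
echo "$(cat /tmp/spdk.key-null.nVH)"   > "$host/dhchap_key"       # host secret (keys[1])
echo "$(cat /tmp/spdk.key-sha384.C30)" > "$host/dhchap_ctrl_key"  # controller secret (ckeys[1]) for bidirectional auth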
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:56.894 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.895 03:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.154 nvme0n1 00:34:57.154 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.154 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.154 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.154 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.154 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.154 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.154 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.154 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.154 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.154 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.154 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
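With the kernel target listening and the keyring populated, connect_authenticate exercises the host side purely over JSON-RPC: bdev_nvme_set_options limits the digests and DH groups the initiator may offer, and bdev_nvme_attach_controller connects to 10.0.0.1:4420 naming the keyring entries via --dhchap-key/--dhchap-ctrlr-key, which forces an authenticated, bidirectional CONNECT. The flags below are exactly those visible in the trace; only the explicit scripts/rpc.py spelling (instead of the rpc_cmd wrapper) is assumed.

# Restrict negotiation to one digest/DH-group pair, then attach with keyring-backed secrets.
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1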
00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.155 nvme0n1 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.155 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.414 03:16:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.414 nvme0n1 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.414 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.675 nvme0n1 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.675 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.935 nvme0n1 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.935 03:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.935 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.935 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.935 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:57.935 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.935 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.935 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.935 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:57.935 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:34:57.935 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:57.935 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.936 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.195 nvme0n1 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.195 03:16:13 
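Every digest/dhgroup/keyid combination in the loop ends with the same verification shown in the surrounding records: bdev_nvme_get_controllers piped through jq must report a controller named nvme0 (proof that the DH-HMAC-CHAP handshake completed), after which bdev_nvme_detach_controller tears the connection down before the next combination is tried. A condensed sketch of that check, again via scripts/rpc.py.

# Authentication succeeded only if the attached controller actually shows up.
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]] || exit 1                   # handshake failed for this combination
scripts/rpc.py bdev_nvme_detach_controller nvme0   # clean up before the next keyid/dhgroup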
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.195 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.455 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.714 nvme0n1 00:34:58.714 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.714 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.714 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.714 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.714 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.714 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.714 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.714 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.714 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.714 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.714 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.714 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.714 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
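The repeated get_main_ns_ip trace in these blocks maps the transport name to the environment variable that carries the initiator-side address and then dereferences it. A rough reconstruction of that helper, inferred only from the xtrace output (variable names follow the trace; the exact guards and fallbacks in nvmf/common.sh may differ):

    get_main_ns_ip() {
        # Inferred from nvmf/common.sh@769-783 in the trace: pick the variable
        # name for the current transport, then expand it indirectly.
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"   # resolves to 10.0.0.1 in this run
    }
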
digest dhgroup keyid key ckey 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.715 
03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.715 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.975 nvme0n1 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.975 03:16:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.975 03:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.235 nvme0n1 00:34:59.235 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.235 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.235 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.235 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.236 03:16:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.236 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.496 nvme0n1 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:59.496 03:16:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.496 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.497 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:59.497 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.497 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.756 nvme0n1 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
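Every secret echoed in these iterations uses the NVMe-oF DH-HMAC-CHAP secret representation DHHC-1:<hh>:<base64>:, where the two-digit field indicates how the base64 payload was transformed (by convention 00 is an untransformed secret and 01/02/03 are hash-transformed variants, e.g. as produced by nvme gen-dhchap-key). A small illustrative check of that field, using one key taken from the trace; the indicator-to-hash mapping below is an assumption, not something the log states:

    # Illustrative only: split a DHHC-1 secret and report the transform indicator.
    key='DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=:'
    IFS=: read -r prefix xform secret _ <<< "$key"
    case $xform in
        00) echo "untransformed secret" ;;
        01) echo "SHA-256 transformed" ;;
        02) echo "SHA-384 transformed" ;;
        03) echo "SHA-512 transformed" ;;
    esac
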
bdev_nvme_detach_controller nvme0 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.756 03:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.324 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.325 nvme0n1 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.325 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:00.584 03:16:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.584 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.844 nvme0n1 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.844 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.845 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:00.845 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.845 03:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.104 nvme0n1 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:01.104 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.105 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.364 nvme0n1 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.364 03:16:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.364 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.623 nvme0n1 00:35:01.623 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.623 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.623 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.623 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.623 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.623 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.623 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.623 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.623 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.623 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.881 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.881 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:01.881 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.881 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
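The trace repeats because auth.sh drives two nested loops, visible at host/auth.sh@101-@104: an outer loop over DH groups (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...) and an inner loop over the configured key indices 0-4, each iteration programming the target side and then connecting from the host. A schematic reconstruction of that driver; only the shape is taken from the trace, and the real script also iterates digests and populates the keys/ckeys arrays beforehand:

    # Shape inferred from host/auth.sh@101-@104 in the xtrace.
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # configure the target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach on the host
        done
    done
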
nvmet_auth_set_key sha256 ffdhe6144 0 00:35:01.881 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.881 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:01.881 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:01.882 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:01.882 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:01.882 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:01.882 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:01.882 03:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.260 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.519 nvme0n1 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:03.519 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 
00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.520 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.779 nvme0n1 00:35:03.779 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.779 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.779 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.779 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.779 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.779 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.038 03:16:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.038 03:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.298 nvme0n1 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.298 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.866 nvme0n1 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.866 03:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.126 nvme0n1 00:35:05.126 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.126 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.126 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.126 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.126 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.126 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.385 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:05.952 nvme0n1 00:35:05.952 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.952 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.952 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.952 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.952 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.952 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.952 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.952 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.952 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.952 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.953 03:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.520 nvme0n1 00:35:06.520 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.520 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.520 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.520 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.520 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.520 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.520 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:06.521 
03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.521 03:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.088 nvme0n1 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.088 
03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:07.088 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:07.089 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.347 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.914 nvme0n1 00:35:07.914 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.914 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.915 03:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.482 nvme0n1 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:08.482 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.483 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.742 nvme0n1 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.742 nvme0n1 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.742 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:09.001 03:16:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.001 03:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.001 nvme0n1 00:35:09.001 03:16:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.001 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.001 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.001 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.001 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.001 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.260 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.261 nvme0n1 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.261 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.519 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.520 nvme0n1 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.520 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.778 nvme0n1 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.778 
03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.778 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.779 03:16:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.779 03:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.041 nvme0n1 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.041 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.301 nvme0n1 00:35:10.301 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.301 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.301 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.301 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.301 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.301 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.301 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:10.301 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.301 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.301 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.301 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.301 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.301 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:10.301 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.302 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.560 nvme0n1 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:10.560 
03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:10.560 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.561 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.820 nvme0n1 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.820 
03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.820 03:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.079 nvme0n1 00:35:11.079 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.079 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.079 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.079 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.079 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.079 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.079 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.079 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.079 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:11.080 03:16:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.080 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.339 nvme0n1 00:35:11.339 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.339 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.339 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.339 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.339 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.339 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:11.597 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.598 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.857 nvme0n1 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.857 03:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.116 nvme0n1 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:12.116 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:12.117 03:16:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.117 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.376 nvme0n1 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.376 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.945 nvme0n1 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.945 03:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.204 nvme0n1 00:35:13.204 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.204 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.204 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.204 03:16:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.204 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.204 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.204 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.204 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.204 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.204 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.463 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.463 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.464 03:16:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.464 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.723 nvme0n1 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:13.723 03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.723 
03:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.292 nvme0n1 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.292 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.552 nvme0n1 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.552 03:16:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.552 03:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.121 nvme0n1 00:35:15.121 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.121 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.121 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.121 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.121 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.380 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.948 nvme0n1 00:35:15.948 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.948 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.948 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.948 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.948 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.948 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.948 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.948 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.948 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:15.948 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.948 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.948 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.949 
03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.949 03:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.517 nvme0n1 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.517 03:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.085 nvme0n1 00:35:17.085 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.085 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.085 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.085 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.085 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.085 03:16:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.344 03:16:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.344 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.913 nvme0n1 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.913 03:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:17.913 nvme0n1 00:35:17.913 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.913 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.913 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.913 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.913 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.913 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.173 nvme0n1 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.173 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:18.433 
03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.433 nvme0n1 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.433 
03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.433 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.692 nvme0n1 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.692 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.693 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.693 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.693 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.693 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.693 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.693 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.693 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.693 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.693 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:18.693 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.693 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.952 nvme0n1 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.952 03:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.952 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.952 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.952 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.952 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.952 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.952 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.952 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.952 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.952 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.952 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.952 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.952 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:18.952 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.952 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.211 nvme0n1 00:35:19.211 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.211 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.211 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.211 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.211 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.211 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.211 
03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.211 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.212 03:16:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.212 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.471 nvme0n1 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:19.471 03:16:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.471 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.731 nvme0n1 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.731 03:16:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.731 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.990 nvme0n1 00:35:19.990 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.990 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.990 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.990 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.990 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.990 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.990 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.990 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:19.991 
03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.991 03:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.991 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
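Each iteration traced above follows the same cycle: the target-side credentials for the current digest/dhgroup/keyid are programmed (nvmet_auth_set_key), the host-side bdev_nvme options are narrowed to the matching digest and DH group, an authenticated attach is attempted, and the resulting controller is verified and detached. A minimal standalone sketch of one such round is below; it only reuses the RPC invocations visible in the trace, while the nvmet configfs paths, the placeholder DHHC-1 secrets, and the assumption that key names key0/ckey0 were registered with the SPDK keyring earlier in auth.sh (outside this excerpt) are illustrative assumptions, not the suite's verbatim helper code.

#!/usr/bin/env bash
# Hedged sketch of one DH-HMAC-CHAP authentication round as exercised by auth.sh.
# Assumptions: an SPDK target is running and rpc.py is on PATH; key0/ckey0 are
# keyring key names registered earlier in the test; the configfs paths below are
# the presumed destinations of the digest/dhgroup/key echoes seen in the trace.
set -e

HOSTNQN=nqn.2024-02.io.spdk:host0
SUBNQN=nqn.2024-02.io.spdk:cnode0
NVMET_HOST=/sys/kernel/config/nvmet/hosts/$HOSTNQN   # assumed target of nvmet_auth_set_key

digest=sha512
dhgroup=ffdhe3072
key='DHHC-1:00:placeholder-host-secret:'              # placeholder, not a real secret
ckey='DHHC-1:03:placeholder-ctrlr-secret:'            # placeholder, not a real secret

# Target side: tell the kernel nvmet subsystem which digest, DH group and keys
# to expect from this host (what the suite's nvmet_auth_set_key helper does).
echo "hmac($digest)" > "$NVMET_HOST/dhchap_hash"
echo "$dhgroup"      > "$NVMET_HOST/dhchap_dhgroup"
echo "$key"          > "$NVMET_HOST/dhchap_key"
echo "$ckey"         > "$NVMET_HOST/dhchap_ctrl_key"

# Host side: restrict the initiator to the same digest/DH group, attach with the
# matching key names, confirm the controller authenticated, then tear it down.
rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc.py bdev_nvme_detach_controller nvme0

The trace then repeats this round for every keyid (0-4) under each digest (sha256/sha384/sha512) and each FFDHE group, which is why the same attach/get_controllers/detach sequence recurs with only the --dhchap-key/--dhchap-ctrlr-key pair and the set_options arguments changing.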
00:35:20.249 nvme0n1 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:20.249 03:16:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.249 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.507 nvme0n1 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.512 03:16:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.512 03:16:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.512 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.772 nvme0n1 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.772 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:21.031 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:21.031 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:21.031 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.031 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.031 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:21.031 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.031 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:21.031 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:21.031 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:21.031 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:21.031 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.031 03:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.031 nvme0n1 00:35:21.031 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.031 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.031 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.031 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.031 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.031 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.290 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.549 nvme0n1 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:21.549 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.550 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.809 nvme0n1 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.809 03:16:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.809 03:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.377 nvme0n1 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:22.377 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:22.378 03:16:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.378 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.637 nvme0n1 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.637 03:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.206 nvme0n1 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.206 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.465 nvme0n1 00:35:23.465 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.465 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.465 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.465 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.465 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.465 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.465 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.465 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.465 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.465 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:23.724 03:16:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.724 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.983 nvme0n1 00:35:23.983 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.983 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.983 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.983 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.983 03:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:23.983 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU4ZGUzYTI4ZGUyZTIyYWE4Y2IxYTVlNTVhMGQ4ZjapoINs: 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: ]] 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGRiMTAzM2RiYjBmYjczNjk2ZGQwYWU5ZDk3ZTE0M2MwZTk5NGJhZGI5NTZlMjc5M2Q2Y2ExZTFkMWVhNGJjY6QQAGw=: 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.984 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.551 nvme0n1 00:35:24.551 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.551 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.551 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.552 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.552 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.552 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.552 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.552 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.552 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.552 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.552 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.552 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.552 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:24.552 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.552 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:24.552 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:24.552 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:24.810 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:24.810 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:24.810 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:24.810 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:24.810 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:24.810 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:24.810 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.811 03:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.378 nvme0n1 00:35:25.378 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.379 03:16:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.379 03:16:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.379 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.946 nvme0n1 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc0ZWU3MjFhNDBkODExZjdiMWMwNWNiYjgwNDYxOWVjMzk2YjczYTYwNWNjN2E1AQxXgg==: 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: ]] 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDRmZjFmYjlkODY5NjJmYjRiMzI4MzNjNjE3YjRhYzINpAPX: 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.946 03:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.946 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.946 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.946 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.946 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.946 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.946 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.946 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.946 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:25.946 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.946 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.946 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.946 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.946 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:25.946 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.946 
03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.513 nvme0n1 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWVhMTI0OThjMjk1YjNiOGFmMDRkMWRjNGNmMTNhODI5NDFlNDY3NjA0NzlkYmJkZDYyZmU4Yjk1NTVhZjUyNH/SgGE=: 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.513 03:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.081 nvme0n1 00:35:27.081 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.081 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.081 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.081 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.081 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.081 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.341 request: 00:35:27.341 { 00:35:27.341 "name": "nvme0", 00:35:27.341 "trtype": "tcp", 00:35:27.341 "traddr": "10.0.0.1", 00:35:27.341 "adrfam": "ipv4", 00:35:27.341 "trsvcid": "4420", 00:35:27.341 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:27.341 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:27.341 "prchk_reftag": false, 00:35:27.341 "prchk_guard": false, 00:35:27.341 "hdgst": false, 00:35:27.341 "ddgst": false, 00:35:27.341 "allow_unrecognized_csi": false, 00:35:27.341 "method": "bdev_nvme_attach_controller", 00:35:27.341 "req_id": 1 00:35:27.341 } 00:35:27.341 Got JSON-RPC error response 00:35:27.341 response: 00:35:27.341 { 00:35:27.341 "code": -5, 00:35:27.341 "message": "Input/output error" 00:35:27.341 } 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.341 request: 00:35:27.341 { 00:35:27.341 "name": "nvme0", 00:35:27.341 "trtype": "tcp", 00:35:27.341 "traddr": "10.0.0.1", 00:35:27.341 "adrfam": "ipv4", 00:35:27.341 "trsvcid": "4420", 00:35:27.341 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:27.341 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:27.341 "prchk_reftag": false, 00:35:27.341 "prchk_guard": false, 00:35:27.341 "hdgst": false, 00:35:27.341 "ddgst": false, 00:35:27.341 "dhchap_key": "key2", 00:35:27.341 "allow_unrecognized_csi": false, 00:35:27.341 "method": "bdev_nvme_attach_controller", 00:35:27.341 "req_id": 1 00:35:27.341 } 00:35:27.341 Got JSON-RPC error response 00:35:27.341 response: 00:35:27.341 { 00:35:27.341 "code": -5, 00:35:27.341 "message": "Input/output error" 00:35:27.341 } 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.341 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.601 request: 00:35:27.601 { 00:35:27.601 "name": "nvme0", 00:35:27.601 "trtype": "tcp", 00:35:27.601 "traddr": "10.0.0.1", 00:35:27.601 "adrfam": "ipv4", 00:35:27.601 "trsvcid": "4420", 00:35:27.601 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:27.601 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:27.601 "prchk_reftag": false, 00:35:27.601 "prchk_guard": false, 00:35:27.601 "hdgst": false, 00:35:27.601 "ddgst": false, 00:35:27.601 "dhchap_key": "key1", 00:35:27.601 "dhchap_ctrlr_key": "ckey2", 00:35:27.601 "allow_unrecognized_csi": false, 00:35:27.601 "method": "bdev_nvme_attach_controller", 00:35:27.601 "req_id": 1 00:35:27.601 } 00:35:27.601 Got JSON-RPC error response 00:35:27.601 response: 00:35:27.601 { 00:35:27.601 "code": -5, 00:35:27.601 "message": "Input/output 
error" 00:35:27.601 } 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.601 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.861 nvme0n1 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.861 request: 00:35:27.861 { 00:35:27.861 "name": "nvme0", 00:35:27.861 "dhchap_key": "key1", 00:35:27.861 "dhchap_ctrlr_key": "ckey2", 00:35:27.861 "method": "bdev_nvme_set_keys", 00:35:27.861 "req_id": 1 00:35:27.861 } 00:35:27.861 Got JSON-RPC error response 00:35:27.861 response: 00:35:27.861 { 00:35:27.861 "code": -13, 00:35:27.861 "message": "Permission denied" 00:35:27.861 } 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:27.861 03:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:29.239 03:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.239 03:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.239 03:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:29.239 03:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.239 03:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.239 03:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:29.239 03:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE2ZGExYTdkNDJjOTMzOTlmYzNhYTNlMmNjNjA1NDdiM2U3NDFlMjgyMWY3ZWMwf6d1Kg==: 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: ]] 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MWMyZDZlZThhYzZmN2UwOTEzNjZjMGE5NTZjMmQ2MDQwNWRhNDAyNTk4MTk3NWVhH2v7XQ==: 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.176 nvme0n1 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTY2YzQyMzJmZGU3YWFhYmUyNGMxN2RkNTNjMTE3MjXJbnTQ: 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: ]] 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODE3ZTYzZjJlOGRkNGU3YzBhYjQ4NzU2OTA4ODljMDg3hZPr: 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:30.176 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:30.177 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.177 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.177 request: 00:35:30.177 { 00:35:30.177 "name": "nvme0", 00:35:30.177 "dhchap_key": "key2", 00:35:30.177 "dhchap_ctrlr_key": "ckey1", 00:35:30.177 "method": "bdev_nvme_set_keys", 00:35:30.177 "req_id": 1 00:35:30.177 } 00:35:30.177 Got JSON-RPC error response 00:35:30.177 response: 00:35:30.177 { 00:35:30.177 "code": -13, 00:35:30.177 "message": "Permission denied" 00:35:30.177 } 00:35:30.177 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:30.177 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:30.177 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:30.177 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:30.177 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:30.177 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.177 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:30.177 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.177 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.177 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.435 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:30.435 03:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:31.370 03:16:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:31.370 rmmod nvme_tcp 00:35:31.370 rmmod nvme_fabrics 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 380079 ']' 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 380079 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 380079 ']' 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 380079 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 380079 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 380079' 00:35:31.370 killing process with pid 380079 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 380079 00:35:31.370 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 380079 00:35:31.629 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:31.629 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:31.629 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:31.629 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:35:31.629 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:35:31.629 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:35:31.629 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:31.629 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:31.629 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:31.629 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.629 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:35:31.629 03:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.163 03:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:34.164 03:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:34.164 03:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:34.164 03:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:34.164 03:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:34.164 03:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:35:34.164 03:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:34.164 03:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:34.164 03:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:34.164 03:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:34.164 03:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:34.164 03:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:34.164 03:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:36.699 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:36.699 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:37.637 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:37.637 03:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.bsm /tmp/spdk.key-null.nVH /tmp/spdk.key-sha256.jHr /tmp/spdk.key-sha384.PNB /tmp/spdk.key-sha512.mtG /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:37.637 03:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:40.172 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:40.172 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:40.172 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:35:40.172 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:40.172 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:40.172 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:40.172 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:40.172 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:40.172 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:40.172 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:40.172 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:40.172 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:40.172 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:35:40.172 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:40.172 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:40.172 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:35:40.172 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:40.430 00:35:40.430 real 0m55.469s 00:35:40.430 user 0m50.397s 00:35:40.430 sys 0m12.482s 00:35:40.430 03:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:40.430 03:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.430 ************************************ 00:35:40.430 END TEST nvmf_auth_host 00:35:40.430 ************************************ 00:35:40.430 03:16:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:35:40.430 03:16:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:40.430 03:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:40.430 03:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:40.431 03:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.431 ************************************ 00:35:40.431 START TEST nvmf_digest 00:35:40.431 ************************************ 00:35:40.431 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:40.431 * Looking for test storage... 
00:35:40.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:40.431 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:40.431 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:35:40.431 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:40.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.696 --rc genhtml_branch_coverage=1 00:35:40.696 --rc genhtml_function_coverage=1 00:35:40.696 --rc genhtml_legend=1 00:35:40.696 --rc geninfo_all_blocks=1 00:35:40.696 --rc geninfo_unexecuted_blocks=1 00:35:40.696 00:35:40.696 ' 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:40.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.696 --rc genhtml_branch_coverage=1 00:35:40.696 --rc genhtml_function_coverage=1 00:35:40.696 --rc genhtml_legend=1 00:35:40.696 --rc geninfo_all_blocks=1 00:35:40.696 --rc geninfo_unexecuted_blocks=1 00:35:40.696 00:35:40.696 ' 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:40.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.696 --rc genhtml_branch_coverage=1 00:35:40.696 --rc genhtml_function_coverage=1 00:35:40.696 --rc genhtml_legend=1 00:35:40.696 --rc geninfo_all_blocks=1 00:35:40.696 --rc geninfo_unexecuted_blocks=1 00:35:40.696 00:35:40.696 ' 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:40.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.696 --rc genhtml_branch_coverage=1 00:35:40.696 --rc genhtml_function_coverage=1 00:35:40.696 --rc genhtml_legend=1 00:35:40.696 --rc geninfo_all_blocks=1 00:35:40.696 --rc geninfo_unexecuted_blocks=1 00:35:40.696 00:35:40.696 ' 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:40.696 
03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:40.696 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:40.697 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:40.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:40.698 03:16:55 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:40.698 03:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:47.269 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:47.269 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:47.269 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:47.269 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:47.269 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:47.269 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:47.269 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:47.269 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:47.269 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:47.269 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:47.269 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:47.269 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:47.270 
03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:47.270 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:47.270 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:47.270 Found net devices under 0000:af:00.0: cvl_0_0 
00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:47.270 Found net devices under 0000:af:00.1: cvl_0_1 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:47.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:47.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:35:47.270 00:35:47.270 --- 10.0.0.2 ping statistics --- 00:35:47.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.270 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:47.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:47.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:35:47.270 00:35:47.270 --- 10.0.0.1 ping statistics --- 00:35:47.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.270 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:47.270 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:47.271 ************************************ 00:35:47.271 START TEST nvmf_digest_clean 00:35:47.271 ************************************ 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
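The nvmf_tcp_init sequence traced above builds the usual two-endpoint topology for these tests: the target-side interface is moved into a private network namespace while the initiator interface stays in the default namespace, and reachability is verified in both directions before the target starts. A condensed sketch of the same steps, copied from the commands in the trace (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are as logged):

# Target interface lives in its own namespace; initiator interface stays outside.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port; the SPDK_NVMF comment lets cleanup strip
# exactly these rules later (iptables-save | grep -v SPDK_NVMF | iptables-restore).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Verify reachability in both directions.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1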
host/digest.sh@120 -- # local dsa_initiator 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=385929 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 385929 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 385929 ']' 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:47.271 [2024-12-14 03:17:01.622327] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:47.271 [2024-12-14 03:17:01.622373] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:47.271 [2024-12-14 03:17:01.698009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.271 [2024-12-14 03:17:01.727371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:47.271 [2024-12-14 03:17:01.727413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:47.271 [2024-12-14 03:17:01.727425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:47.271 [2024-12-14 03:17:01.727435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:47.271 [2024-12-14 03:17:01.727444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:47.271 [2024-12-14 03:17:01.728072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:47.271 null0 00:35:47.271 [2024-12-14 03:17:01.938782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.271 [2024-12-14 03:17:01.962961] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=385957 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 385957 /var/tmp/bperf.sock 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 385957 ']' 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:47.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:47.271 03:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:47.271 [2024-12-14 03:17:02.017819] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:47.271 [2024-12-14 03:17:02.017860] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385957 ] 00:35:47.271 [2024-12-14 03:17:02.092672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.271 [2024-12-14 03:17:02.114884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:47.271 03:17:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.271 03:17:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:47.271 03:17:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:47.271 03:17:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:47.271 03:17:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:47.530 03:17:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:47.530 03:17:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:47.790 nvme0n1 00:35:47.790 03:17:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:47.790 03:17:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:47.790 Running I/O for 2 seconds... 
00:35:49.665 24709.00 IOPS, 96.52 MiB/s [2024-12-14T02:17:05.057Z] 25074.00 IOPS, 97.95 MiB/s 00:35:49.924 Latency(us) 00:35:49.924 [2024-12-14T02:17:05.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.924 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:49.924 nvme0n1 : 2.04 24593.19 96.07 0.00 0.00 5097.06 2512.21 44439.65 00:35:49.924 [2024-12-14T02:17:05.057Z] =================================================================================================================== 00:35:49.924 [2024-12-14T02:17:05.057Z] Total : 24593.19 96.07 0.00 0.00 5097.06 2512.21 44439.65 00:35:49.924 { 00:35:49.924 "results": [ 00:35:49.924 { 00:35:49.924 "job": "nvme0n1", 00:35:49.924 "core_mask": "0x2", 00:35:49.924 "workload": "randread", 00:35:49.924 "status": "finished", 00:35:49.924 "queue_depth": 128, 00:35:49.924 "io_size": 4096, 00:35:49.924 "runtime": 2.044306, 00:35:49.924 "iops": 24593.18712560644, 00:35:49.924 "mibps": 96.06713720940016, 00:35:49.924 "io_failed": 0, 00:35:49.924 "io_timeout": 0, 00:35:49.924 "avg_latency_us": 5097.06262346135, 00:35:49.924 "min_latency_us": 2512.213333333333, 00:35:49.924 "max_latency_us": 44439.64952380952 00:35:49.924 } 00:35:49.924 ], 00:35:49.924 "core_count": 1 00:35:49.924 } 00:35:49.924 03:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:49.924 03:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:49.924 03:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:49.924 03:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:49.924 | select(.opcode=="crc32c") 00:35:49.924 | "\(.module_name) \(.executed)"' 00:35:49.924 03:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:49.924 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:49.924 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:49.924 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:49.924 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:49.924 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 385957 00:35:49.924 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 385957 ']' 00:35:49.924 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 385957 00:35:49.924 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:49.924 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.924 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 385957 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 385957' 00:35:50.182 killing process with pid 385957 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 385957 00:35:50.182 Received shutdown signal, test time was about 2.000000 seconds 00:35:50.182 00:35:50.182 Latency(us) 00:35:50.182 [2024-12-14T02:17:05.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:50.182 [2024-12-14T02:17:05.315Z] =================================================================================================================== 00:35:50.182 [2024-12-14T02:17:05.315Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 385957 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=386006 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 386006 /var/tmp/bperf.sock 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 386006 ']' 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:50.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:50.182 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:50.182 [2024-12-14 03:17:05.283186] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:50.182 [2024-12-14 03:17:05.283232] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386006 ] 00:35:50.182 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:50.182 Zero copy mechanism will not be used. 00:35:50.440 [2024-12-14 03:17:05.356635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:50.440 [2024-12-14 03:17:05.377669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:50.440 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:50.440 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:50.440 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:50.440 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:50.440 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:50.698 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:50.698 03:17:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:50.957 nvme0n1 00:35:50.957 03:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:50.957 03:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:51.215 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:51.215 Zero copy mechanism will not be used. 00:35:51.215 Running I/O for 2 seconds... 
00:35:53.088 5648.00 IOPS, 706.00 MiB/s [2024-12-14T02:17:08.222Z] 5795.00 IOPS, 724.38 MiB/s 00:35:53.089 Latency(us) 00:35:53.089 [2024-12-14T02:17:08.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:53.089 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:53.089 nvme0n1 : 2.00 5793.35 724.17 0.00 0.00 2759.02 604.65 6584.81 00:35:53.089 [2024-12-14T02:17:08.222Z] =================================================================================================================== 00:35:53.089 [2024-12-14T02:17:08.222Z] Total : 5793.35 724.17 0.00 0.00 2759.02 604.65 6584.81 00:35:53.089 { 00:35:53.089 "results": [ 00:35:53.089 { 00:35:53.089 "job": "nvme0n1", 00:35:53.089 "core_mask": "0x2", 00:35:53.089 "workload": "randread", 00:35:53.089 "status": "finished", 00:35:53.089 "queue_depth": 16, 00:35:53.089 "io_size": 131072, 00:35:53.089 "runtime": 2.003333, 00:35:53.089 "iops": 5793.3453899077185, 00:35:53.089 "mibps": 724.1681737384648, 00:35:53.089 "io_failed": 0, 00:35:53.089 "io_timeout": 0, 00:35:53.089 "avg_latency_us": 2759.0181472637305, 00:35:53.089 "min_latency_us": 604.6476190476191, 00:35:53.089 "max_latency_us": 6584.8076190476195 00:35:53.089 } 00:35:53.089 ], 00:35:53.089 "core_count": 1 00:35:53.089 } 00:35:53.089 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:53.089 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:53.089 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:53.089 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:53.089 | select(.opcode=="crc32c") 00:35:53.089 | "\(.module_name) \(.executed)"' 00:35:53.089 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 386006 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 386006 ']' 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 386006 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386006 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386006' 00:35:53.348 killing process with pid 386006 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 386006 00:35:53.348 Received shutdown signal, test time was about 2.000000 seconds 00:35:53.348 00:35:53.348 Latency(us) 00:35:53.348 [2024-12-14T02:17:08.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:53.348 [2024-12-14T02:17:08.481Z] =================================================================================================================== 00:35:53.348 [2024-12-14T02:17:08.481Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:53.348 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 386006 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=386056 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 386056 /var/tmp/bperf.sock 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 386056 ']' 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:53.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:53.607 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:53.607 [2024-12-14 03:17:08.617657] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:53.607 [2024-12-14 03:17:08.617708] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386056 ] 00:35:53.607 [2024-12-14 03:17:08.693425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.607 [2024-12-14 03:17:08.712905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.866 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:53.866 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:53.866 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:53.866 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:53.866 03:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:54.126 03:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:54.126 03:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:54.384 nvme0n1 00:35:54.384 03:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:54.384 03:17:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:54.384 Running I/O for 2 seconds... 
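(Note: the trace above is the core of each run_bperf pass in this log: bdevperf is started with --wait-for-rpc, the framework is initialized over its RPC socket, a controller is attached with data digest enabled, perform_tests drives the I/O, and the crc32c accel counters are read back afterwards. A condensed, hand-written sketch of those RPC calls, reusing the paths and socket seen in this run (not an excerpt from host/digest.sh itself):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BPERF=/var/tmp/bperf.sock
    $RPC -s $BPERF framework_start_init
    $RPC -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s $BPERF perform_tests
    $RPC -s $BPERF accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

The host/digest.sh@95-96 assertions, visible in the traces before and after each run, then require the reported module to be "software" and the executed count to be greater than zero.)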
00:35:56.257 27574.00 IOPS, 107.71 MiB/s [2024-12-14T02:17:11.390Z] 27731.00 IOPS, 108.32 MiB/s 00:35:56.257 Latency(us) 00:35:56.257 [2024-12-14T02:17:11.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.257 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:56.257 nvme0n1 : 2.01 27732.34 108.33 0.00 0.00 4609.25 3401.63 12108.56 00:35:56.257 [2024-12-14T02:17:11.390Z] =================================================================================================================== 00:35:56.257 [2024-12-14T02:17:11.390Z] Total : 27732.34 108.33 0.00 0.00 4609.25 3401.63 12108.56 00:35:56.257 { 00:35:56.257 "results": [ 00:35:56.257 { 00:35:56.257 "job": "nvme0n1", 00:35:56.257 "core_mask": "0x2", 00:35:56.257 "workload": "randwrite", 00:35:56.257 "status": "finished", 00:35:56.257 "queue_depth": 128, 00:35:56.257 "io_size": 4096, 00:35:56.257 "runtime": 2.005673, 00:35:56.257 "iops": 27732.33722545998, 00:35:56.257 "mibps": 108.32944228695305, 00:35:56.257 "io_failed": 0, 00:35:56.257 "io_timeout": 0, 00:35:56.257 "avg_latency_us": 4609.247565557307, 00:35:56.257 "min_latency_us": 3401.630476190476, 00:35:56.257 "max_latency_us": 12108.55619047619 00:35:56.257 } 00:35:56.257 ], 00:35:56.257 "core_count": 1 00:35:56.257 } 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:56.517 | select(.opcode=="crc32c") 00:35:56.517 | "\(.module_name) \(.executed)"' 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 386056 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 386056 ']' 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 386056 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:56.517 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386056 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386056' 00:35:56.775 killing process with pid 386056 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 386056 00:35:56.775 Received shutdown signal, test time was about 2.000000 seconds 00:35:56.775 00:35:56.775 Latency(us) 00:35:56.775 [2024-12-14T02:17:11.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.775 [2024-12-14T02:17:11.908Z] =================================================================================================================== 00:35:56.775 [2024-12-14T02:17:11.908Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 386056 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=386108 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 386108 /var/tmp/bperf.sock 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 386108 ']' 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:56.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:56.775 03:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:56.776 [2024-12-14 03:17:11.853174] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:56.776 [2024-12-14 03:17:11.853220] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386108 ] 00:35:56.776 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:56.776 Zero copy mechanism will not be used. 00:35:57.034 [2024-12-14 03:17:11.927568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.034 [2024-12-14 03:17:11.949704] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:57.034 03:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:57.034 03:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:57.034 03:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:57.034 03:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:57.034 03:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:57.293 03:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:57.293 03:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:57.551 nvme0n1 00:35:57.551 03:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:57.551 03:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:57.810 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:57.810 Zero copy mechanism will not be used. 00:35:57.810 Running I/O for 2 seconds... 
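(Note: the repeated "zero copy threshold" lines around this run only record that 131072-byte I/Os exceed the 65536-byte threshold, so the zero copy mechanism is not used for them; the 4096-byte runs do not print that notice. The JSON blob that follows is the per-job report returned by bdevperf.py perform_tests; if such a blob were saved to a file, the headline figures could be pulled out with the same jq tool the test already uses. The results.json path here is hypothetical, purely for illustration:

    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json
)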
00:35:59.680 6727.00 IOPS, 840.88 MiB/s [2024-12-14T02:17:14.813Z] 6815.00 IOPS, 851.88 MiB/s 00:35:59.680 Latency(us) 00:35:59.680 [2024-12-14T02:17:14.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:59.680 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:59.680 nvme0n1 : 2.00 6813.51 851.69 0.00 0.00 2344.37 1685.21 9362.29 00:35:59.680 [2024-12-14T02:17:14.813Z] =================================================================================================================== 00:35:59.680 [2024-12-14T02:17:14.813Z] Total : 6813.51 851.69 0.00 0.00 2344.37 1685.21 9362.29 00:35:59.680 { 00:35:59.680 "results": [ 00:35:59.680 { 00:35:59.680 "job": "nvme0n1", 00:35:59.680 "core_mask": "0x2", 00:35:59.680 "workload": "randwrite", 00:35:59.680 "status": "finished", 00:35:59.680 "queue_depth": 16, 00:35:59.680 "io_size": 131072, 00:35:59.680 "runtime": 2.003374, 00:35:59.680 "iops": 6813.5056160257645, 00:35:59.680 "mibps": 851.6882020032206, 00:35:59.680 "io_failed": 0, 00:35:59.680 "io_timeout": 0, 00:35:59.680 "avg_latency_us": 2344.3720707482994, 00:35:59.680 "min_latency_us": 1685.2114285714285, 00:35:59.680 "max_latency_us": 9362.285714285714 00:35:59.680 } 00:35:59.680 ], 00:35:59.680 "core_count": 1 00:35:59.680 } 00:35:59.680 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:59.680 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:59.680 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:59.680 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:59.680 | select(.opcode=="crc32c") 00:35:59.680 | "\(.module_name) \(.executed)"' 00:35:59.680 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:59.939 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:59.939 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:59.939 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:59.939 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:59.939 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 386108 00:35:59.939 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 386108 ']' 00:35:59.939 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 386108 00:35:59.939 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:59.939 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:59.939 03:17:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386108 00:35:59.939 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:59.939 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:59.939 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386108' 00:35:59.939 killing process with pid 386108 00:35:59.939 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 386108 00:35:59.939 Received shutdown signal, test time was about 2.000000 seconds 00:35:59.939 00:35:59.939 Latency(us) 00:35:59.939 [2024-12-14T02:17:15.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:59.939 [2024-12-14T02:17:15.072Z] =================================================================================================================== 00:35:59.939 [2024-12-14T02:17:15.072Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:59.939 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 386108 00:36:00.198 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 385929 00:36:00.198 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 385929 ']' 00:36:00.198 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 385929 00:36:00.198 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:00.198 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:00.198 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 385929 00:36:00.198 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:00.198 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:00.198 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 385929' 00:36:00.198 killing process with pid 385929 00:36:00.198 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 385929 00:36:00.198 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 385929 00:36:00.456 00:36:00.456 real 0m13.811s 00:36:00.456 user 0m26.497s 00:36:00.456 sys 0m4.507s 00:36:00.456 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.456 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:00.456 ************************************ 00:36:00.456 END TEST nvmf_digest_clean 00:36:00.456 ************************************ 00:36:00.456 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:00.456 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:00.456 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:00.456 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:00.456 ************************************ 00:36:00.456 START TEST nvmf_digest_error 00:36:00.456 ************************************ 00:36:00.456 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:36:00.456 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:00.456 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:00.456 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.456 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:00.456 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=386194 00:36:00.456 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 386194 00:36:00.457 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:00.457 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 386194 ']' 00:36:00.457 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:00.457 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:00.457 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:00.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:00.457 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:00.457 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:00.457 [2024-12-14 03:17:15.503120] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:00.457 [2024-12-14 03:17:15.503168] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:00.457 [2024-12-14 03:17:15.582859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:00.716 [2024-12-14 03:17:15.604212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:00.716 [2024-12-14 03:17:15.604245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:00.716 [2024-12-14 03:17:15.604252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:00.716 [2024-12-14 03:17:15.604258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:00.716 [2024-12-14 03:17:15.604263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
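(Note: because the target for the error test is launched with -e 0xFFFF, all tracepoint groups are enabled, and the NOTICE lines above spell out how its trace could be inspected while it runs; reproduced here only as the log itself suggests, with the copy destination being an arbitrary example:

    spdk_trace -s nvmf -i 0              # live snapshot, as suggested by the NOTICE above
    cp /dev/shm/nvmf_trace.0 /tmp/       # or keep the shm file for offline analysis/debug
)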
00:36:00.716 [2024-12-14 03:17:15.604788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:00.716 [2024-12-14 03:17:15.693265] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:00.716 null0 00:36:00.716 [2024-12-14 03:17:15.783936] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:00.716 [2024-12-14 03:17:15.808120] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=386219 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 386219 /var/tmp/bperf.sock 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 386219 ']' 
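(Note: this is where nvmf_digest_error diverges from the clean test. Before framework init the target assigns the crc32c opcode to the error accel module (rpc_cmd accel_assign_opc -o crc32c -m error, traced at host/digest.sh@104 above), and once bdevperf is attached the test arms corruption with accel_error_inject_error -o crc32c -t corrupt -i 256 (host/digest.sh@67 below). A condensed sketch of that target-side arming, reusing the rpc.py path from this run; hand-written for illustration, not copied from digest.sh:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o crc32c -m error                     # route crc32c through the error module
    $RPC framework_start_init                                    # target was started with --wait-for-rpc
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256    # flags exactly as traced in this run

With crc32c results corrupted on the target side, the data digests on the wire stop matching their payloads, and the host's nvme_tcp layer reports the "data digest error" and "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completions that fill the rest of this run.)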
00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:00.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:00.716 03:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:00.975 [2024-12-14 03:17:15.861705] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:00.975 [2024-12-14 03:17:15.861745] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386219 ] 00:36:00.975 [2024-12-14 03:17:15.936695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:00.975 [2024-12-14 03:17:15.958808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:00.975 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:00.975 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:00.975 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:00.975 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:01.234 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:01.234 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.234 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:01.234 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.234 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:01.234 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:01.492 nvme0n1 00:36:01.492 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:01.492 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.492 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:01.492 
03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.492 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:01.492 03:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:01.751 Running I/O for 2 seconds... 00:36:01.751 [2024-12-14 03:17:16.660231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.660262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.660272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.670213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.670236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.670245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.680145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.680166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.680175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.688728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.688748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.688756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.697381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.697401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.697409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.707055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.707076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.707084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.716146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.716166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.716175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.725027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.725047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.725054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.734276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.734296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.734308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.743131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.743151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.743159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.753682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.753703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.753711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.762041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.762061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.762070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.770803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.770823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.770832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.779697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.779716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.779725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.789676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.789696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.789704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.799301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.799326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.799334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.808219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.808239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.808247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.816959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.816981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.816989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.826212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.826232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.826240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.834559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.834579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.834587] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.844900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.844920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.844928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.855779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.855799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.855807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.863792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.863811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.863819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.751 [2024-12-14 03:17:16.873838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:01.751 [2024-12-14 03:17:16.873859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.751 [2024-12-14 03:17:16.873867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.010 [2024-12-14 03:17:16.883929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.010 [2024-12-14 03:17:16.883950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-14 03:17:16.883958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.010 [2024-12-14 03:17:16.892597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.010 [2024-12-14 03:17:16.892615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-14 03:17:16.892624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.010 [2024-12-14 03:17:16.901264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.010 [2024-12-14 03:17:16.901284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12236 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:02.010 [2024-12-14 03:17:16.901292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.010 [2024-12-14 03:17:16.910507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.010 [2024-12-14 03:17:16.910527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-14 03:17:16.910535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.010 [2024-12-14 03:17:16.921110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.010 [2024-12-14 03:17:16.921130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-14 03:17:16.921139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.010 [2024-12-14 03:17:16.929957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.010 [2024-12-14 03:17:16.929976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-14 03:17:16.929984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.010 [2024-12-14 03:17:16.939596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.010 [2024-12-14 03:17:16.939615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-14 03:17:16.939624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.010 [2024-12-14 03:17:16.947424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.010 [2024-12-14 03:17:16.947444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-14 03:17:16.947453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.010 [2024-12-14 03:17:16.957741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.010 [2024-12-14 03:17:16.957761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-14 03:17:16.957771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.010 [2024-12-14 03:17:16.965564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.010 [2024-12-14 03:17:16.965584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:24 nsid:1 lba:8299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-14 03:17:16.965591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:16.976833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:16.976859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:16.976867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:16.986676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:16.986696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:16.986704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:16.995636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:16.995657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:16.995664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:17.004411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:17.004431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:17.004440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:17.014079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:17.014099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:17.014108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:17.022626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:17.022646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:17.022655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:17.032519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:17.032539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:17.032547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:17.041198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:17.041218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:17.041227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:17.051271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:17.051290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:17.051298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:17.059663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:17.059682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:17.059690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:17.069694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:17.069714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:17.069722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:17.078674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:17.078695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:17.078703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:17.088507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:17.088527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:17.088536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:17.096653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 
00:36:02.011 [2024-12-14 03:17:17.096673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:17.096681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:17.107450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:17.107472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:17.107480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:17.118187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:17.118208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:17.118216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.011 [2024-12-14 03:17:17.129535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.011 [2024-12-14 03:17:17.129555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-14 03:17:17.129563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.270 [2024-12-14 03:17:17.142955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.270 [2024-12-14 03:17:17.142976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.270 [2024-12-14 03:17:17.142987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.270 [2024-12-14 03:17:17.153211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.270 [2024-12-14 03:17:17.153231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.270 [2024-12-14 03:17:17.153239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.270 [2024-12-14 03:17:17.161618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.270 [2024-12-14 03:17:17.161638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.270 [2024-12-14 03:17:17.161646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.270 [2024-12-14 03:17:17.173283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.270 [2024-12-14 03:17:17.173304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.270 [2024-12-14 03:17:17.173316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.270 [2024-12-14 03:17:17.182120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.270 [2024-12-14 03:17:17.182139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.182147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.190605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.190624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.190632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.199971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.199990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.199998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.209922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.209943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.209951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.217872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.217893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.217901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.227130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.227154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.227163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.237821] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.237842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.237850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.247862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.247882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.247891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.255817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.255837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.255845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.265773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.265794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.265801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.274989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.275009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.275017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.283185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.283206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.283214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.292196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.292218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.292227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.302867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.302889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.302897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.313831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.313852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.313861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.322280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.322301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.322309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.331964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.331984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.331992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.343225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.343246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.343254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.354043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.354063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.354071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.362902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.362922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.362930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.374614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.374634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.374642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.384551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.384570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.384579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.271 [2024-12-14 03:17:17.393006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.271 [2024-12-14 03:17:17.393029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.271 [2024-12-14 03:17:17.393037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.405874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.405897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.405907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.417681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.417701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.417709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.430032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.430052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.430061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.438842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.438863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.438871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.450416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.450436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.450444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.461613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.461633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.461641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.473909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.473929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.473937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.485799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.485819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.485827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.495650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.495669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.495677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.504593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.504613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.504621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.514552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.514572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:02.531 [2024-12-14 03:17:17.514580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.526053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.526074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.526082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.536805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.536825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.536833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.547076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.547096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.547104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.558447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.558467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.558475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.570809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.570829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.570837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.581914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.581934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.581945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.590925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.590945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8729 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.590953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.602859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.602879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.602887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.615815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.615836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.615845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.628013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.628033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.628041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 [2024-12-14 03:17:17.637634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.637654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.637662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.531 25674.00 IOPS, 100.29 MiB/s [2024-12-14T02:17:17.664Z] [2024-12-14 03:17:17.647338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.531 [2024-12-14 03:17:17.647359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.531 [2024-12-14 03:17:17.647367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.532 [2024-12-14 03:17:17.658610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.532 [2024-12-14 03:17:17.658631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.532 [2024-12-14 03:17:17.658640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.790 [2024-12-14 03:17:17.667813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.790 [2024-12-14 
03:17:17.667833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.790 [2024-12-14 03:17:17.667841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.790 [2024-12-14 03:17:17.679200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.790 [2024-12-14 03:17:17.679223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.790 [2024-12-14 03:17:17.679231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.790 [2024-12-14 03:17:17.687517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.790 [2024-12-14 03:17:17.687537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.790 [2024-12-14 03:17:17.687546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.790 [2024-12-14 03:17:17.697323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.790 [2024-12-14 03:17:17.697343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.790 [2024-12-14 03:17:17.697351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.790 [2024-12-14 03:17:17.707937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.707956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.707965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.716173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.716192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.716200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.725224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.725244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.725252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.734318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.734339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.734347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.744007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.744028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.744036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.753697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.753717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.753728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.762801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.762821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.762828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.772864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.772883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.772892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.780980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.781000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.781008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.791733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.791754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.791761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.803213] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.803232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.803240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.812369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.812389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.812396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.823791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.823811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.823819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.835924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.835945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.835953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.848893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.848918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.848926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.861215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.861235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.861243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.872179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.872198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.872207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.880425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.880445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.880453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.892072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.892092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.892101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.900141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.900161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.900170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.791 [2024-12-14 03:17:17.912156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:02.791 [2024-12-14 03:17:17.912176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.791 [2024-12-14 03:17:17.912184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.050 [2024-12-14 03:17:17.924448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.050 [2024-12-14 03:17:17.924467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.050 [2024-12-14 03:17:17.924475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.050 [2024-12-14 03:17:17.937093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.050 [2024-12-14 03:17:17.937114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.050 [2024-12-14 03:17:17.937122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.050 [2024-12-14 03:17:17.948463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.050 [2024-12-14 03:17:17.948483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.050 [2024-12-14 03:17:17.948491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.050 [2024-12-14 03:17:17.962100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.050 [2024-12-14 03:17:17.962121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:17.962129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:17.973169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:17.973189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:17.973197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:17.981674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:17.981694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:17.981702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:17.994047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:17.994069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:17.994078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.002467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.002487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.002495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.013595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.013615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.013624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.022983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.023004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.023012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.031606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.031626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.031638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.041888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.041910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.041918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.050138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.050158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.050166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.061072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.061092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.061099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.069390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.069410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.069418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.080445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.080465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.080473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.092246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.092265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:03.051 [2024-12-14 03:17:18.092273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.100321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.100341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.100350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.111398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.111417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.111425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.120945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.120969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.120977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.129753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.129772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.129781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.139078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.139099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.139107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.148158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.148178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.148186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.157441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.157460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:15810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.157468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.166366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.166386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.166395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.051 [2024-12-14 03:17:18.176231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.051 [2024-12-14 03:17:18.176252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.051 [2024-12-14 03:17:18.176260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.310 [2024-12-14 03:17:18.184957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.310 [2024-12-14 03:17:18.184978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.310 [2024-12-14 03:17:18.184986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.310 [2024-12-14 03:17:18.194538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.310 [2024-12-14 03:17:18.194557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.310 [2024-12-14 03:17:18.194565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.310 [2024-12-14 03:17:18.204528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.310 [2024-12-14 03:17:18.204548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.310 [2024-12-14 03:17:18.204556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.310 [2024-12-14 03:17:18.212773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.310 [2024-12-14 03:17:18.212793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.310 [2024-12-14 03:17:18.212801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.310 [2024-12-14 03:17:18.225365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.310 [2024-12-14 03:17:18.225385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.310 [2024-12-14 03:17:18.225393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.310 [2024-12-14 03:17:18.237587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.310 [2024-12-14 03:17:18.237607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.310 [2024-12-14 03:17:18.237615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.310 [2024-12-14 03:17:18.249872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.310 [2024-12-14 03:17:18.249892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.310 [2024-12-14 03:17:18.249900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.310 [2024-12-14 03:17:18.261459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.261479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.261487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.269085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.269105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.269113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.280483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.280503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.280511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.290762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.290785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.290793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.301561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 
00:36:03.311 [2024-12-14 03:17:18.301582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.301590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.312536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.312557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.312565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.320361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.320384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.320391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.331513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.331535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.331543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.341055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.341075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.341082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.349812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.349832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.349840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.361443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.361463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.361471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.373638] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.373658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.373667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.384896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.384917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.384925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.396345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.396366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.396374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.404572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.404593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.404601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.415602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.415622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.415631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.423877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.423897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.423905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.311 [2024-12-14 03:17:18.435342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.311 [2024-12-14 03:17:18.435362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.311 [2024-12-14 03:17:18.435370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:36:03.570 [2024-12-14 03:17:18.447776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.570 [2024-12-14 03:17:18.447808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.570 [2024-12-14 03:17:18.447816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.570 [2024-12-14 03:17:18.456457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.570 [2024-12-14 03:17:18.456478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.570 [2024-12-14 03:17:18.456486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.570 [2024-12-14 03:17:18.467800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.570 [2024-12-14 03:17:18.467820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.570 [2024-12-14 03:17:18.467832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.570 [2024-12-14 03:17:18.477265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.570 [2024-12-14 03:17:18.477285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.570 [2024-12-14 03:17:18.477293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.570 [2024-12-14 03:17:18.485552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.570 [2024-12-14 03:17:18.485573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.570 [2024-12-14 03:17:18.485581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.495303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.495327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.495336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.505454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.505474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.505482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.513771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.513792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.513800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.523986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.524005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.524014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.535229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.535249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.535258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.548464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.548484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.548492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.556551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.556575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.556583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.566439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.566460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.566468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.576403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.576423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.576431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.585851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.585871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.585879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.595079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.595099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.595107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.604431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.604451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.604459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.616895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.616915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.616922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.624871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.624890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.624899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 [2024-12-14 03:17:18.636060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.636080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.571 [2024-12-14 03:17:18.636088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.571 25358.00 IOPS, 99.05 MiB/s [2024-12-14T02:17:18.704Z] [2024-12-14 03:17:18.649248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23096e0) 00:36:03.571 [2024-12-14 03:17:18.649266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11350 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:03.571 [2024-12-14 03:17:18.649274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:03.571
00:36:03.571 Latency(us)
00:36:03.571 [2024-12-14T02:17:18.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:03.571 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:36:03.571 nvme0n1 : 2.00 25369.97 99.10 0.00 0.00 5040.31 2387.38 17476.27
00:36:03.571 [2024-12-14T02:17:18.704Z] ===================================================================================================================
00:36:03.571 [2024-12-14T02:17:18.704Z] Total : 25369.97 99.10 0.00 0.00 5040.31 2387.38 17476.27
00:36:03.571 {
00:36:03.571   "results": [
00:36:03.571     {
00:36:03.571       "job": "nvme0n1",
00:36:03.571       "core_mask": "0x2",
00:36:03.571       "workload": "randread",
00:36:03.571       "status": "finished",
00:36:03.571       "queue_depth": 128,
00:36:03.571       "io_size": 4096,
00:36:03.571       "runtime": 2.004102,
00:36:03.571       "iops": 25369.966199325183,
00:36:03.571       "mibps": 99.101430466114,
00:36:03.571       "io_failed": 0,
00:36:03.571       "io_timeout": 0,
00:36:03.571       "avg_latency_us": 5040.308370327912,
00:36:03.571       "min_latency_us": 2387.382857142857,
00:36:03.571       "max_latency_us": 17476.266666666666
00:36:03.571     }
00:36:03.571   ],
00:36:03.571   "core_count": 1
00:36:03.571 }
00:36:03.571 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:03.571 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:03.571 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:03.571 | .driver_specific
00:36:03.571 | .nvme_error
00:36:03.571 | .status_code
00:36:03.571 | .command_transient_transport_error'
00:36:03.571 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:03.830 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 199 > 0 ))
00:36:03.830 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 386219
00:36:03.830 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 386219 ']'
00:36:03.830 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 386219
00:36:03.830 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:36:03.830 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:03.830 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386219
00:36:03.830 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:03.830 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:03.830 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386219'
00:36:03.830 killing process with pid 386219
00:36:03.830 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 386219
00:36:03.830 Received shutdown signal, test time was about 2.000000 seconds
00:36:03.830
00:36:03.830 Latency(us)
00:36:03.830 [2024-12-14T02:17:18.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:03.830 [2024-12-14T02:17:18.963Z] ===================================================================================================================
00:36:03.830 [2024-12-14T02:17:18.963Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:03.830 03:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 386219
00:36:04.089 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:36:04.089 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:04.089 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:36:04.089 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:04.089 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:04.089 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=386267
00:36:04.089 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 386267 /var/tmp/bperf.sock
00:36:04.089 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:36:04.089 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 386267 ']'
00:36:04.089 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:04.089 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:04.089 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:04.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:04.089 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:04.089 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:04.089 [2024-12-14 03:17:19.112613] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:36:04.089 [2024-12-14 03:17:19.112658] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386267 ]
00:36:04.089 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:04.089 Zero copy mechanism will not be used.
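For context, the get_transient_errcount check traced above reduces to one RPC plus a jq filter. A minimal sketch, assuming the SPDK checkout from the trace is available as $rootdir (that variable name is illustrative) and bdevperf is still listening on /var/tmp/bperf.sock:

# Count COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1,
# using the same jq path as host/digest.sh@28 in the trace above.
errcount=$("$rootdir"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))   # the run above reported 199, so this check passes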
00:36:04.089 [2024-12-14 03:17:19.187628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:04.089 [2024-12-14 03:17:19.209824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:36:04.348 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:04.348 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:36:04.348 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:04.348 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:04.607 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:04.607 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:04.607 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:04.607 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:04.607 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:04.607 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:04.607 nvme0n1
00:36:04.866 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:04.866 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:04.866 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:04.866 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:04.866 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:04.866 03:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:04.866 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:04.866 Zero copy mechanism will not be used.
00:36:04.866 Running I/O for 2 seconds...
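Put together, the setup for this second run is the short RPC sequence traced above. The sketch below simply restates those traced calls using the suite's own helper names (bperf_rpc, rpc_cmd and bperf_py are defined by the test scripts, not reproduced here), so it is illustrative rather than a standalone script:

# host/digest.sh@61-@69, condensed: keep per-bdev NVMe error counters,
# attach the controller with TCP data digest enabled, then inject crc32c
# corruption via the accel error RPC so data-digest validation fails
# during the run (the errors logged below).
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
bperf_py perform_tests    # bdevperf then runs the 2-second randread workload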
00:36:04.866 [2024-12-14 03:17:19.861667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.866 [2024-12-14 03:17:19.861700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.866 [2024-12-14 03:17:19.861711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:04.866 [2024-12-14 03:17:19.867380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.866 [2024-12-14 03:17:19.867403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.866 [2024-12-14 03:17:19.867412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.866 [2024-12-14 03:17:19.872627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.866 [2024-12-14 03:17:19.872649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.866 [2024-12-14 03:17:19.872658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:04.866 [2024-12-14 03:17:19.875461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.866 [2024-12-14 03:17:19.875483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.866 [2024-12-14 03:17:19.875492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:04.866 [2024-12-14 03:17:19.880755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.880776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.880784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.885987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.886010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.886018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.891185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.891211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.891220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.896449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.896470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.896479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.901766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.901788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.901797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.906959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.906981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.906989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.912247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.912268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.912276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.917456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.917478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.917486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.922710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.922731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.922739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.927870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.927892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.927901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.933154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.933175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.933184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.938280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.938302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.938311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.943398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.943419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.943428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.948614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.948636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.948645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.953879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.953900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.953909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.959116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.959136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.959145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.964378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.964399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.964407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.969835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.969857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.969866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.975230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.975252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.975260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.980632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.980655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.980667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.985799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.985821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.985829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.991049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.991070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.991080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.867 [2024-12-14 03:17:19.996200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:04.867 [2024-12-14 03:17:19.996222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.867 [2024-12-14 03:17:19.996230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.002358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.002382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 
[2024-12-14 03:17:20.002391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.008964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.008990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.009000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.014456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.014478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.014486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.019778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.019799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.019808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.025341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.025365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.025374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.030753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.030782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.030791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.036065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.036088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.036096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.041470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.041492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.041501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.046796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.046819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.046827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.052027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.052047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.052056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.057329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.057351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.057360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.062530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.062551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.062559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.067661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.067683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.067691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.072830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.072851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.072859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.078028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.078050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.078059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.083397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.083418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.083427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.127 [2024-12-14 03:17:20.088710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.127 [2024-12-14 03:17:20.088732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.127 [2024-12-14 03:17:20.088741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.094289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.094316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.094326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.099606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.099628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.099636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.104507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.104529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.104537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.109996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.110018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.110027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.115617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.115684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.115705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.121341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.121364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.121376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.126751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.126773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.126781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.132127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.132149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.132158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.137518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.137540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.137548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.142970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.142992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.143000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.148386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.148407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.148415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.153790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 
[2024-12-14 03:17:20.153812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.153820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.159111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.159132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.159140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.164458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.164480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.164488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.169836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.169862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.169870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.175054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.175076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.175084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.180665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.180687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.180695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.186082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.186103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.186111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.191509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.191530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.191538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.196777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.196798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.196807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.201903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.201925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.201933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.207140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.207161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.207169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.212581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.212602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.212610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.217698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.217731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.217739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.222892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.222914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.222922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.228081] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.228103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.228111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.233128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.128 [2024-12-14 03:17:20.233149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.128 [2024-12-14 03:17:20.233157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.128 [2024-12-14 03:17:20.238267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.129 [2024-12-14 03:17:20.238288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.129 [2024-12-14 03:17:20.238295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.129 [2024-12-14 03:17:20.243359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.129 [2024-12-14 03:17:20.243380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.129 [2024-12-14 03:17:20.243388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.129 [2024-12-14 03:17:20.248520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.129 [2024-12-14 03:17:20.248541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.129 [2024-12-14 03:17:20.248549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.129 [2024-12-14 03:17:20.253681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.129 [2024-12-14 03:17:20.253703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.129 [2024-12-14 03:17:20.253711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.258930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.258952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.258963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:36:05.388 [2024-12-14 03:17:20.264284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.264307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.264322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.269622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.269642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.269650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.274854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.274875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.274883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.280346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.280369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.280377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.286070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.286091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.286099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.292371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.292393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.292402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.297443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.297465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.297473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.302953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.302977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.302985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.308252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.308274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.308282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.313541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.313563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.313571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.318951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.318974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.318982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.324248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.324270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.324278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.329507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.329529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.329538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.334800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.334821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.334830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.339813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.339835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.339843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.388 [2024-12-14 03:17:20.344959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.388 [2024-12-14 03:17:20.344981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.388 [2024-12-14 03:17:20.344989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.350211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.350232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.350244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.355557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.355579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.355587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.360949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.360971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.360979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.366265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.366286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.366294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.371643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.371671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.371680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.377027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.377049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.377058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.382287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.382310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.382325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.387560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.387582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.387591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.392810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.392831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.392839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.398158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.398183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.398192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.403501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.403523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.403531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.408795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.408817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:05.389 [2024-12-14 03:17:20.408826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.414096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.414117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.414125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.419360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.419383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.419391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.424755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.424778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.424786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.430138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.430159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.430167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.433632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.433653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.433662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.440295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.440323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.440332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.445735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.445756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23808 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.445765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.451047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.451068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.451076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.456375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.456397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.456406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.462751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.462773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.462781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.470695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.470717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.470726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.477841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.477864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.477872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.483569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.483592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.483601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.489372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.489394] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.489402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.495151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.495173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.389 [2024-12-14 03:17:20.495186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.389 [2024-12-14 03:17:20.500357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.389 [2024-12-14 03:17:20.500378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.390 [2024-12-14 03:17:20.500386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.390 [2024-12-14 03:17:20.505584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.390 [2024-12-14 03:17:20.505606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.390 [2024-12-14 03:17:20.505614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.390 [2024-12-14 03:17:20.510814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.390 [2024-12-14 03:17:20.510836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.390 [2024-12-14 03:17:20.510844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.390 [2024-12-14 03:17:20.515457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.390 [2024-12-14 03:17:20.515478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.390 [2024-12-14 03:17:20.515486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.649 [2024-12-14 03:17:20.520463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.649 [2024-12-14 03:17:20.520485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.649 [2024-12-14 03:17:20.520494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.649 [2024-12-14 03:17:20.525459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.649 [2024-12-14 03:17:20.525481] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.649 [2024-12-14 03:17:20.525489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.649 [2024-12-14 03:17:20.530455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.649 [2024-12-14 03:17:20.530476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.649 [2024-12-14 03:17:20.530485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.649 [2024-12-14 03:17:20.535446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.649 [2024-12-14 03:17:20.535468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.649 [2024-12-14 03:17:20.535477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.649 [2024-12-14 03:17:20.540540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.649 [2024-12-14 03:17:20.540566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.649 [2024-12-14 03:17:20.540574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.649 [2024-12-14 03:17:20.545747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.649 [2024-12-14 03:17:20.545769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.649 [2024-12-14 03:17:20.545778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.649 [2024-12-14 03:17:20.550985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.649 [2024-12-14 03:17:20.551007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.649 [2024-12-14 03:17:20.551016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.649 [2024-12-14 03:17:20.556237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.649 [2024-12-14 03:17:20.556259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.649 [2024-12-14 03:17:20.556267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.649 [2024-12-14 03:17:20.561410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2277130) 00:36:05.649 [2024-12-14 03:17:20.561432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.649 [2024-12-14 03:17:20.561440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.649 [2024-12-14 03:17:20.566614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.566636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.566645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.571779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.571801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.571810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.576965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.576987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.576995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.582186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.582208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.582216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.587614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.587637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.587646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.592889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.592912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.592920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.598130] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.598152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.598160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.603428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.603450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.603459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.608369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.608391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.608400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.613571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.613593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.613602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.618845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.618866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.618874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.624110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.624132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.624141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.629408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.629429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.629441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:36:05.650 [2024-12-14 03:17:20.634577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.634599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.634608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.639781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.639803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.639811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.645004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.645027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.645035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.650227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.650249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.650257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.655432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.655453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.655461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.660944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.660966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.660975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.666591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.666613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.666622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.671793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.671816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.671825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.676936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.676958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.676967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.682179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.682201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.682210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.687396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.687418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.687426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.692622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.692645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.692653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.696049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.696071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.696079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.699824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.650 [2024-12-14 03:17:20.699846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.650 [2024-12-14 03:17:20.699855] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.650 [2024-12-14 03:17:20.705140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.705162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.651 [2024-12-14 03:17:20.705170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.651 [2024-12-14 03:17:20.710136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.710158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.651 [2024-12-14 03:17:20.710166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.651 [2024-12-14 03:17:20.715096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.715118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.651 [2024-12-14 03:17:20.715130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.651 [2024-12-14 03:17:20.720135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.720157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.651 [2024-12-14 03:17:20.720165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.651 [2024-12-14 03:17:20.725302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.725329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.651 [2024-12-14 03:17:20.725339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.651 [2024-12-14 03:17:20.730477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.730499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.651 [2024-12-14 03:17:20.730507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.651 [2024-12-14 03:17:20.735728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.735750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.651 [2024-12-14 03:17:20.735758] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.651 [2024-12-14 03:17:20.740561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.740583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.651 [2024-12-14 03:17:20.740592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.651 [2024-12-14 03:17:20.745665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.745687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.651 [2024-12-14 03:17:20.745695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.651 [2024-12-14 03:17:20.750638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.750660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.651 [2024-12-14 03:17:20.750668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.651 [2024-12-14 03:17:20.755616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.755638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.651 [2024-12-14 03:17:20.755649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.651 [2024-12-14 03:17:20.760646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.760677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.651 [2024-12-14 03:17:20.760687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.651 [2024-12-14 03:17:20.765720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.765742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.651 [2024-12-14 03:17:20.765751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.651 [2024-12-14 03:17:20.770916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.770937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:05.651 [2024-12-14 03:17:20.770945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.651 [2024-12-14 03:17:20.776002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.651 [2024-12-14 03:17:20.776024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.651 [2024-12-14 03:17:20.776032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.781180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.781202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.781210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.786340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.786362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.786370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.791471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.791492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.791500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.796603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.796624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.796631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.801776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.801795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.801804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.806914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.806935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.806943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.812031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.812053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.812061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.816890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.816912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.816921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.821934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.821955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.821963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.827092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.827114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.827122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.832290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.832311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.832327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.837483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.837504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.837512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.842644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.842666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.842674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.847812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.847834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.847846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.852978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.852999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.853007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.911 5857.00 IOPS, 732.12 MiB/s [2024-12-14T02:17:21.044Z] [2024-12-14 03:17:20.858735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.858757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.858765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.863847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.863868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.863877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.868932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.868954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.868962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.874090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.874113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.874121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.879278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.879300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.879308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.884517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.884538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.884547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.889677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.889699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.911 [2024-12-14 03:17:20.889707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.911 [2024-12-14 03:17:20.894873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.911 [2024-12-14 03:17:20.894895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.894903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.899262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.899283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.899291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.902259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.902280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.902288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.907326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.907347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.907355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.912343] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.912364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.912371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.917255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.917276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.917284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.922146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.922167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.922175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.927105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.927127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.927135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.932006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.932026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.932038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.938105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.938128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.938136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.943464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.943485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.943494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:36:05.912 [2024-12-14 03:17:20.948562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.948583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.948591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.953648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.953670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.953678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.958803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.958825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.958832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.963939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.963960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.963968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.969074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.969096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.969105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.974243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.974265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.974273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.979383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.979408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.979416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.984530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.984551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.984559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.989638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.989660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.989668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.994785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.994806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.994814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:20.999842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:20.999863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:20.999871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:21.004971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:21.004992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:21.005000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:21.010114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:21.010135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:21.010143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:21.015222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:21.015244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:21.015251] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:21.020345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:21.020365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:21.020373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:21.025460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:21.025481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.912 [2024-12-14 03:17:21.025489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:05.912 [2024-12-14 03:17:21.030582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.912 [2024-12-14 03:17:21.030604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.913 [2024-12-14 03:17:21.030611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:05.913 [2024-12-14 03:17:21.035729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.913 [2024-12-14 03:17:21.035749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.913 [2024-12-14 03:17:21.035756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:05.913 [2024-12-14 03:17:21.040904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:05.913 [2024-12-14 03:17:21.040925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.913 [2024-12-14 03:17:21.040934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.172 [2024-12-14 03:17:21.046107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.172 [2024-12-14 03:17:21.046128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.172 [2024-12-14 03:17:21.046136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.172 [2024-12-14 03:17:21.051328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.172 [2024-12-14 03:17:21.051348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.172 [2024-12-14 03:17:21.051356] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.172 [2024-12-14 03:17:21.056079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.172 [2024-12-14 03:17:21.056100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.172 [2024-12-14 03:17:21.056108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.172 [2024-12-14 03:17:21.061108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.172 [2024-12-14 03:17:21.061129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.172 [2024-12-14 03:17:21.061137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.172 [2024-12-14 03:17:21.066026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.172 [2024-12-14 03:17:21.066046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.172 [2024-12-14 03:17:21.066058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.172 [2024-12-14 03:17:21.070972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.172 [2024-12-14 03:17:21.070994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.172 [2024-12-14 03:17:21.071001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.172 [2024-12-14 03:17:21.075887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.172 [2024-12-14 03:17:21.075906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.172 [2024-12-14 03:17:21.075914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.172 [2024-12-14 03:17:21.080756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.172 [2024-12-14 03:17:21.080777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.172 [2024-12-14 03:17:21.080785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.085654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.085676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:06.173 [2024-12-14 03:17:21.085684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.090652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.090673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.090681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.095633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.095655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.095663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.100530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.100551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.100560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.105372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.105392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.105400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.110391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.110414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.110422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.115510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.115531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.115540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.120659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.120680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.120688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.125862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.125885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.125894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.131046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.131068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.131076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.136248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.136268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.136276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.141426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.141448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.141457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.146575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.146596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.146605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.151698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.151719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.151727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.156778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.156800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.156808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.161897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.161919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.161927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.167049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.167071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.167079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.172202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.172222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.172230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.177349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.177371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.177379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.182470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.182492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.182500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.187593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.187614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.187623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.192740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 
00:36:06.173 [2024-12-14 03:17:21.192760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.192768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.197831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.197851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.197862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.202974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.202996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.203004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.208081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.208102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.208110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.213179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.213200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.213208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.218364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.218385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.173 [2024-12-14 03:17:21.218393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.173 [2024-12-14 03:17:21.223504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.173 [2024-12-14 03:17:21.223525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.223533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.174 [2024-12-14 03:17:21.228605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.228626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.228634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.174 [2024-12-14 03:17:21.233694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.233714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.233722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.174 [2024-12-14 03:17:21.238805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.238827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.238834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.174 [2024-12-14 03:17:21.243906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.243927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.243935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.174 [2024-12-14 03:17:21.248991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.249013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.249020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.174 [2024-12-14 03:17:21.254113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.254135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.254143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.174 [2024-12-14 03:17:21.259200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.259221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.259229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.174 [2024-12-14 03:17:21.264269] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.264290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.264298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.174 [2024-12-14 03:17:21.269462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.269482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.269491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.174 [2024-12-14 03:17:21.274574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.274595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.274602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.174 [2024-12-14 03:17:21.279702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.279724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.279732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.174 [2024-12-14 03:17:21.284804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.284825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.284837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.174 [2024-12-14 03:17:21.289998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.290019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.290027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.174 [2024-12-14 03:17:21.295121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.295142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.295150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:36:06.174 [2024-12-14 03:17:21.300217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.174 [2024-12-14 03:17:21.300238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.174 [2024-12-14 03:17:21.300246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.305320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.305341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.305349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.310528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.310549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.310557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.315666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.315687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.315695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.320841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.320863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.320871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.325977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.325998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.326007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.331092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.331117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.331125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.336189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.336210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.336218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.341270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.341292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.341300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.346441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.346462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.346470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.351600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.351621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.351629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.356764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.356786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.356793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.361849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.361871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.361879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.366991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.367013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.367020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.372139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.372161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.372169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.377413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.377434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.377443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.382653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.382674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.382682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.387841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.387862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.387870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.393003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.393025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.393033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.398161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.398184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.434 [2024-12-14 03:17:21.398203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.403297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.434 [2024-12-14 03:17:21.403327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:06.434 [2024-12-14 03:17:21.403337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.434 [2024-12-14 03:17:21.408440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.408463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.408472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.413539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.413561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.413568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.418988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.419010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.419025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.424650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.424672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.424680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.429808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.429829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.429837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.434945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.434967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.434975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.440096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.440117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13184 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.440125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.445296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.445322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.445331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.450511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.450532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.450540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.455642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.455663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.455671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.460769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.460791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.460799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.465880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.465901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.465909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.471042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.471063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.471071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.476157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.476178] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.476186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.481253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.481274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.481282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.486344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.486365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.486372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.491453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.491475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.491482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.496591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.496612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.496620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.501789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.501810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.501818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.506918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.506940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.506952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.512021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 
03:17:21.512043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.512052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.517135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.517156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.517164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.522273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.522294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.522302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.527447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.527468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.527476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.532543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.532565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.532573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.537691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.537711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.537719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.542836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.542856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.542865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.547955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2277130) 00:36:06.435 [2024-12-14 03:17:21.547976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.435 [2024-12-14 03:17:21.547984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.435 [2024-12-14 03:17:21.553063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.436 [2024-12-14 03:17:21.553087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.436 [2024-12-14 03:17:21.553095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.436 [2024-12-14 03:17:21.558189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.436 [2024-12-14 03:17:21.558210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.436 [2024-12-14 03:17:21.558219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.436 [2024-12-14 03:17:21.563347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.436 [2024-12-14 03:17:21.563368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.436 [2024-12-14 03:17:21.563376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.568500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.568521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.568529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.573640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.573660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.573668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.578769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.578788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.578796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.583947] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.583968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.583977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.589119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.589141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.589150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.594216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.594238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.594245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.599337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.599357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.599365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.604473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.604494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.604502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.609601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.609622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.609631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.614775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.614796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.614805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:36:06.697 [2024-12-14 03:17:21.619951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.619972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.619980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.625128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.625150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.625158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.630324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.630346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.630354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.635507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.635528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.635536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.640647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.640669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.640680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.645734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.645757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.645765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.650861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.650883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.697 [2024-12-14 03:17:21.650891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.697 [2024-12-14 03:17:21.655988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.697 [2024-12-14 03:17:21.656009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.656017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.661104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.661126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.661134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.666238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.666259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.666267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.671492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.671514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.671524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.676813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.676835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.676843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.681672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.681694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.681703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.686482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.686507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.686515] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.691287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.691308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.691321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.696055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.696077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.696087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.700822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.700842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.700850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.705638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.705658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.705666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.710578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.710600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.710607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.715559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.715580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.715588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.720657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.720679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.720687] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.725806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.725828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.725836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.730951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.730972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.730980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.736206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.736229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.736237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.741519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.741540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.741548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.746711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.746733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.746741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.752075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.752097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.752105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.757516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.757538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:06.698 [2024-12-14 03:17:21.757546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.762811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.762832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.762840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.768356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.768377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.768386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.773901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.773923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.773935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.779123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.779144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.779152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.784682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.784703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.784712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.790592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.790614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.790622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.796778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.698 [2024-12-14 03:17:21.796799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.698 [2024-12-14 03:17:21.796807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.698 [2024-12-14 03:17:21.804559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.699 [2024-12-14 03:17:21.804581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.699 [2024-12-14 03:17:21.804589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.699 [2024-12-14 03:17:21.811401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.699 [2024-12-14 03:17:21.811423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.699 [2024-12-14 03:17:21.811432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.699 [2024-12-14 03:17:21.817717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.699 [2024-12-14 03:17:21.817740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.699 [2024-12-14 03:17:21.817748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.699 [2024-12-14 03:17:21.823526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.699 [2024-12-14 03:17:21.823548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.699 [2024-12-14 03:17:21.823556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.958 [2024-12-14 03:17:21.829493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.958 [2024-12-14 03:17:21.829517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.958 [2024-12-14 03:17:21.829525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.958 [2024-12-14 03:17:21.835719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.958 [2024-12-14 03:17:21.835741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.958 [2024-12-14 03:17:21.835750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.958 [2024-12-14 03:17:21.841424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130) 00:36:06.958 [2024-12-14 03:17:21.841445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:06.958 [2024-12-14 03:17:21.841453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:06.958 [2024-12-14 03:17:21.846655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130)
00:36:06.958 [2024-12-14 03:17:21.846676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:06.958 [2024-12-14 03:17:21.846684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:06.958 [2024-12-14 03:17:21.851941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130)
00:36:06.958 [2024-12-14 03:17:21.851962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:06.958 [2024-12-14 03:17:21.851970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:06.958 [2024-12-14 03:17:21.857670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2277130)
00:36:06.958 [2024-12-14 03:17:21.857690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:06.958 [2024-12-14 03:17:21.857698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:06.958 5926.50 IOPS, 740.81 MiB/s
00:36:06.958 Latency(us)
00:36:06.958 [2024-12-14T02:17:22.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:06.958 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:36:06.958 nvme0n1 : 2.00 5924.93 740.62 0.00 0.00 2697.75 647.56 11172.33
00:36:06.958 [2024-12-14T02:17:22.091Z] ===================================================================================================================
00:36:06.958 [2024-12-14T02:17:22.091Z] Total : 5924.93 740.62 0.00 0.00 2697.75 647.56 11172.33
00:36:06.958 {
00:36:06.958 "results": [
00:36:06.958 {
00:36:06.958 "job": "nvme0n1",
00:36:06.958 "core_mask": "0x2",
00:36:06.958 "workload": "randread",
00:36:06.958 "status": "finished",
00:36:06.958 "queue_depth": 16,
00:36:06.958 "io_size": 131072,
00:36:06.958 "runtime": 2.003229,
00:36:06.958 "iops": 5924.9341937442,
00:36:06.958 "mibps": 740.616774218025,
00:36:06.958 "io_failed": 0,
00:36:06.958 "io_timeout": 0,
00:36:06.958 "avg_latency_us": 2697.7514565755528,
00:36:06.958 "min_latency_us": 647.5580952380952,
00:36:06.958 "max_latency_us": 11172.327619047619
00:36:06.958 }
00:36:06.958 ],
00:36:06.958 "core_count": 1
00:36:06.958 }
00:36:06.958 03:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:06.958 03:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:06.958 03:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:06.958 | .driver_specific
00:36:06.958 | .nvme_error
00:36:06.958 |
.status_code 00:36:06.958 | .command_transient_transport_error' 00:36:06.958 03:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:06.958 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 383 > 0 )) 00:36:06.958 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 386267 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 386267 ']' 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 386267 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386267 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386267' 00:36:07.218 killing process with pid 386267 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 386267 00:36:07.218 Received shutdown signal, test time was about 2.000000 seconds 00:36:07.218 00:36:07.218 Latency(us) 00:36:07.218 [2024-12-14T02:17:22.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:07.218 [2024-12-14T02:17:22.351Z] =================================================================================================================== 00:36:07.218 [2024-12-14T02:17:22.351Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 386267 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=386316 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 386316 /var/tmp/bperf.sock 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 386316 ']' 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:07.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:07.218 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:07.218 [2024-12-14 03:17:22.343009] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:07.218 [2024-12-14 03:17:22.343057] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386316 ] 00:36:07.477 [2024-12-14 03:17:22.416967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:07.477 [2024-12-14 03:17:22.436372] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:07.477 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:07.477 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:07.477 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:07.477 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:07.736 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:07.736 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.736 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:07.736 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.736 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:07.736 03:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:07.994 nvme0n1 00:36:07.994 03:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:07.994 03:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.994 03:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:08.254 03:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.254 03:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:08.254 03:17:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:08.254 Running I/O for 2 seconds... 00:36:08.254 [2024-12-14 03:17:23.233921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.234042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.234072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.243477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.243600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.243622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.252927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.253038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.253057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.262293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.262406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.262425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.271657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.271767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.271785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.281013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.281120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.281138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.290361] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.290471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.290489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.299707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.299814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.299832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.309002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.309108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.309125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.318358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.318464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.318483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.327627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.327733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.327753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.336940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.337044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.337062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.346264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.346378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.346396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 
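For context: the repeated "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" records in this trace are expected output of host/digest.sh run_bperf_err, which corrupts crc32c digest calculation on purpose and then checks that the corruption is detected, i.e. that I/Os complete with a transient transport error rather than succeeding. A minimal sketch of that flow, using only commands that appear in this trace and assuming it is run from the root of an SPDK checkout with an nvmf target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 (the log shows the same commands with absolute Jenkins workspace paths):

  # start bdevperf idle (-z) on its own RPC socket: 4 KiB random writes, QD 128, 2 s run
  # (backgrounded here for the sketch; the test script instead waits on the socket via waitforlisten)
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # enable per-bdev NVMe error counters and unlimited bdev-level retries
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach the TCP controller with data digest enabled (--ddgst)
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # inject crc32c corruption in the accel layer so digest verification fails
  # (the script issues this through its rpc_cmd helper; the socket it targets
  #  is not expanded in this excerpt)
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

  # run the workload, then require a non-zero transient-transport-error count
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # the randread pass above reported 383 such completions

The earlier randread pass uses the same error-count check at host/digest.sh@71, just with the read workload arguments (-w randread -o 131072 -q 16) shown in its job summary.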
[2024-12-14 03:17:23.355674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.355777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.355796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.365011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.365118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.365137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.374332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.374438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.374456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.254 [2024-12-14 03:17:23.383747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.254 [2024-12-14 03:17:23.383855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.254 [2024-12-14 03:17:23.383873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.393291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.393416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.393433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.402640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.402746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.402763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.411977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.412088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.412106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 
dnr:0 00:36:08.514 [2024-12-14 03:17:23.421295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.421410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.421428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.430626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.430732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.430750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.439942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.440051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.440070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.449278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.449390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.449408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.458596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.458702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.458719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.467927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.468035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.468052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.477259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.477373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.477392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 
cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.486589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.486696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.486713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.496165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.496271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.496290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.505719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.505826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.505844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.515073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.515179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.515197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.524399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.524510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.524528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.533895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.534000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.534018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.543216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.543325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.543344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.552552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.552656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.552674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.561885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.561992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.514 [2024-12-14 03:17:23.562009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.514 [2024-12-14 03:17:23.571201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.514 [2024-12-14 03:17:23.571304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.515 [2024-12-14 03:17:23.571327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.515 [2024-12-14 03:17:23.580540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.515 [2024-12-14 03:17:23.580646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.515 [2024-12-14 03:17:23.580664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.515 [2024-12-14 03:17:23.590068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.515 [2024-12-14 03:17:23.590176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.515 [2024-12-14 03:17:23.590193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.515 [2024-12-14 03:17:23.599606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.515 [2024-12-14 03:17:23.599712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.515 [2024-12-14 03:17:23.599731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.515 [2024-12-14 03:17:23.608924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.515 [2024-12-14 03:17:23.609032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.515 [2024-12-14 03:17:23.609050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.515 [2024-12-14 03:17:23.618246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.515 [2024-12-14 03:17:23.618356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.515 [2024-12-14 03:17:23.618374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.515 [2024-12-14 03:17:23.627565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.515 [2024-12-14 03:17:23.627674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.515 [2024-12-14 03:17:23.627692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.515 [2024-12-14 03:17:23.636891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.515 [2024-12-14 03:17:23.636997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.515 [2024-12-14 03:17:23.637016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.774 [2024-12-14 03:17:23.646525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.774 [2024-12-14 03:17:23.646636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.774 [2024-12-14 03:17:23.646655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.774 [2024-12-14 03:17:23.655978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.774 [2024-12-14 03:17:23.656082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.774 [2024-12-14 03:17:23.656103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.774 [2024-12-14 03:17:23.665293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.774 [2024-12-14 03:17:23.665405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.774 [2024-12-14 03:17:23.665424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.774 [2024-12-14 03:17:23.674622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.774 [2024-12-14 03:17:23.674726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.774 [2024-12-14 03:17:23.674744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.774 [2024-12-14 03:17:23.683942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.774 [2024-12-14 03:17:23.684047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.774 [2024-12-14 03:17:23.684064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.774 [2024-12-14 03:17:23.693244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.774 [2024-12-14 03:17:23.693357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.774 [2024-12-14 03:17:23.693374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.774 [2024-12-14 03:17:23.702554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.774 [2024-12-14 03:17:23.702660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.774 [2024-12-14 03:17:23.702678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.774 [2024-12-14 03:17:23.711879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.774 [2024-12-14 03:17:23.711985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.774 [2024-12-14 03:17:23.712003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.774 [2024-12-14 03:17:23.721190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.774 [2024-12-14 03:17:23.721296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.774 [2024-12-14 03:17:23.721319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.774 [2024-12-14 03:17:23.730502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.774 [2024-12-14 03:17:23.730609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.774 [2024-12-14 03:17:23.730628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.774 [2024-12-14 03:17:23.739837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.774 [2024-12-14 03:17:23.739948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.774 [2024-12-14 
03:17:23.739965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.774 [2024-12-14 03:17:23.749400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.774 [2024-12-14 03:17:23.749507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.749525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.758796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.758903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.758921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.768176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.768282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.768300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.777508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.777616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.777634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.786946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.787053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.787071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.796282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.796397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.796414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.805616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.805724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:08.775 [2024-12-14 03:17:23.805742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.814938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.815044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.815061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.824253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.824365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.824383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.833597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.833703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.833721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.842921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.843027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.843044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.852229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.852337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.852355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.861552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.861657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.861674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.870888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.870993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12260 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.871010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.880200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.880305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.880328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.889501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.889610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.889627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:08.775 [2024-12-14 03:17:23.898842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:08.775 [2024-12-14 03:17:23.898950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.775 [2024-12-14 03:17:23.898971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:23.908368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:23.908479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:23.908497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:23.917808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:23.917913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:23.917931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:23.927102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:23.927209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:23.927227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:23.936452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:23.936560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:14931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:23.936577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:23.945698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:23.945804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:23.945821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:23.955065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:23.955172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:23.955190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:23.964407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:23.964513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:23.964531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:23.973684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:23.973789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:23.973806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:23.982988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:23.983098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:23.983116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:23.992202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:23.992325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:23.992342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.001763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.001880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:94 nsid:1 lba:16386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.001899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.011107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.011213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.011231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.020476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.020582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.020600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.029796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.029900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.029917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.039115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.039222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.039240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.048425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.048530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.048547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.057738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.057844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.057862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.067056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.067163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.067181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.076510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.076617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.076635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.085828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.085935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.085952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.095160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.095267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.095285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.104485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.104593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.104610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.113811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.113918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.113936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.123122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.123228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.123244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.132437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 
03:17:24.132548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.132566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.035 [2024-12-14 03:17:24.141668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.035 [2024-12-14 03:17:24.141775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.035 [2024-12-14 03:17:24.141794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.036 [2024-12-14 03:17:24.151164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.036 [2024-12-14 03:17:24.151273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.036 [2024-12-14 03:17:24.151291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.036 [2024-12-14 03:17:24.160560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.036 [2024-12-14 03:17:24.160666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.036 [2024-12-14 03:17:24.160684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.170127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.170234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.170251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.179518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.179626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.179643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.188837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.188945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.188963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.198153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 
00:36:09.295 [2024-12-14 03:17:24.198260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.198278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.207475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.207582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.207599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.216801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.216905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.216923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 27043.00 IOPS, 105.64 MiB/s [2024-12-14T02:17:24.428Z] [2024-12-14 03:17:24.226111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.226220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.226239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.235419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.235526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.235543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.244771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.244876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.244893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.254280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.254411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.254429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.263678] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.263785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.263802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.273008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.273117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.273135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.282326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.282439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.282457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.291643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.291751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.291768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.300995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.301100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.301117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.310327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.310436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.310455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.319649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.319756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.319773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 
[2024-12-14 03:17:24.328962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.329068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.295 [2024-12-14 03:17:24.329088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.295 [2024-12-14 03:17:24.338280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.295 [2024-12-14 03:17:24.338391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.296 [2024-12-14 03:17:24.338409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.296 [2024-12-14 03:17:24.347585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.296 [2024-12-14 03:17:24.347691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.296 [2024-12-14 03:17:24.347709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.296 [2024-12-14 03:17:24.356898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.296 [2024-12-14 03:17:24.357005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.296 [2024-12-14 03:17:24.357023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.296 [2024-12-14 03:17:24.366211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.296 [2024-12-14 03:17:24.366333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.296 [2024-12-14 03:17:24.366350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.296 [2024-12-14 03:17:24.375549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.296 [2024-12-14 03:17:24.375658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.296 [2024-12-14 03:17:24.375676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:09.296 [2024-12-14 03:17:24.384817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720 00:36:09.296 [2024-12-14 03:17:24.384921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.296 [2024-12-14 03:17:24.384941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 
dnr:0
00:36:09.296 [2024-12-14 03:17:24.394137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720
00:36:09.296 [2024-12-14 03:17:24.394245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:09.296 [2024-12-14 03:17:24.394262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0
[... the same three-line pattern (tcp.c:2241:data_crc32_calc_done digest error on tqpair=(0x1015dc0) with pdu=0x200016efe720, nvme_io_qpair_print_command WRITE print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining in-flight writes from 03:17:24.403 through 03:17:25.217; repeated entries omitted ...]
00:36:10.335 27188.00 IOPS, 106.20 MiB/s
00:36:10.335 Latency(us)
00:36:10.335 [2024-12-14T02:17:25.468Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:36:10.335 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:10.335 nvme0n1                     :       2.01    27191.63     106.22       0.00     0.00    4699.33    3432.84   14293.09
00:36:10.335 [2024-12-14T02:17:25.468Z] ===================================================================================================================
00:36:10.335 [2024-12-14T02:17:25.468Z] Total                       :             27191.63     106.22       0.00     0.00    4699.33    3432.84   14293.09
00:36:10.335 {
00:36:10.335 "results": [
00:36:10.335 { 00:36:10.335 "job": "nvme0n1", 00:36:10.335 "core_mask": "0x2", 00:36:10.335 "workload": "randwrite", 00:36:10.335 "status": "finished", 00:36:10.335 "queue_depth": 128, 00:36:10.335 "io_size": 4096, 00:36:10.335 "runtime": 2.005617, 00:36:10.335 "iops": 27191.63230068353, 00:36:10.335 "mibps": 106.21731367454504, 00:36:10.335 "io_failed": 0, 00:36:10.335 "io_timeout": 0, 00:36:10.335 "avg_latency_us": 4699.326891734249, 00:36:10.335 "min_latency_us": 3432.8380952380953, 00:36:10.335 "max_latency_us": 14293.089523809524 00:36:10.335 } 00:36:10.335 ], 00:36:10.335 "core_count": 1 00:36:10.335 } 00:36:10.335 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:10.335 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:10.335 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:10.335 | .driver_specific 00:36:10.335 | .nvme_error 00:36:10.335 | .status_code 00:36:10.335 | .command_transient_transport_error' 00:36:10.335 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:10.335 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 213 > 0 )) 00:36:10.335 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 386316 00:36:10.335 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 386316 ']' 00:36:10.335 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 386316 00:36:10.335 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:10.335 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:10.335 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386316 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386316' 00:36:10.594 killing process with pid 386316 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 386316 00:36:10.594 Received shutdown signal, test time was about 2.000000 seconds 00:36:10.594 00:36:10.594 Latency(us) 00:36:10.594 [2024-12-14T02:17:25.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:10.594 [2024-12-14T02:17:25.727Z] =================================================================================================================== 00:36:10.594 [2024-12-14T02:17:25.727Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 386316 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:10.594 03:17:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=386374 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 386374 /var/tmp/bperf.sock 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 386374 ']' 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:10.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:10.594 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:10.594 [2024-12-14 03:17:25.690768] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:10.594 [2024-12-14 03:17:25.690814] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386374 ] 00:36:10.594 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:10.594 Zero copy mechanism will not be used. 
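[Editor's note] The xtrace entries at host/digest.sh@27 through @73 above show how the suite decided the queue-depth-128 run passed: it reads the per-controller NVMe error counters out of bdev_get_iostat and requires a non-zero transient transport error count (213 here) before killing that bdevperf instance. The following is a minimal bash sketch of that check, reconstructed from the trace rather than copied from the repository; the SPDK_DIR and BPERF_SOCK variable names are only for the sketch, with values taken from this job's log.

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat exposes the NVMe error counters that are collected
        # because the controller was created with --nvme-error-stat; pull the
        # transient transport error count for the first (only) bdev.
        "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The digest-error test only passes if the corrupted digests really surfaced
    # as COMMAND TRANSIENT TRANSPORT ERROR completions.
    (( $(get_transient_errcount nvme0n1) > 0 ))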
00:36:10.853 [2024-12-14 03:17:25.763965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:10.853 [2024-12-14 03:17:25.785831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:10.853 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:10.853 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:10.853 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:10.853 03:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:11.111 03:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:11.111 03:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.111 03:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:11.111 03:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.111 03:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:11.111 03:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:11.370 nvme0n1 00:36:11.370 03:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:11.370 03:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.370 03:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:11.370 03:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.370 03:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:11.370 03:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:11.370 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:11.370 Zero copy mechanism will not be used. 00:36:11.370 Running I/O for 2 seconds... 
00:36:11.370 [2024-12-14 03:17:26.496556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8
00:36:11.370 [2024-12-14 03:17:26.496631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.370 [2024-12-14 03:17:26.496658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line pattern (tcp.c:2241:data_crc32_calc_done digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8, WRITE command print with len:32 SGL TRANSPORT DATA BLOCK, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the 128 KiB writes of this run from 03:17:26.501 through 03:17:26.640; repeated entries omitted ...]
00:36:11.631 [2024-12-14 03:17:26.644305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8
00:36:11.631 [2024-12-14 03:17:26.644369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.631 [2024-12-14
03:17:26.644386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.648490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.648545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.631 [2024-12-14 03:17:26.648562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.652831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.652883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.631 [2024-12-14 03:17:26.652900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.657043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.657109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.631 [2024-12-14 03:17:26.657127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.661472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.661559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.631 [2024-12-14 03:17:26.661578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.666095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.666157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.631 [2024-12-14 03:17:26.666177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.671020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.671155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.631 [2024-12-14 03:17:26.671174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.676157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.676222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:11.631 [2024-12-14 03:17:26.676239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.681259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.681320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.631 [2024-12-14 03:17:26.681339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.685954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.686036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.631 [2024-12-14 03:17:26.686054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.690512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.690585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.631 [2024-12-14 03:17:26.690603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.695419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.695481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.631 [2024-12-14 03:17:26.695498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.700939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.700998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.631 [2024-12-14 03:17:26.701016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.705786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.705856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.631 [2024-12-14 03:17:26.705874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.710371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.710429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.631 [2024-12-14 03:17:26.710446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.715214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.715306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.631 [2024-12-14 03:17:26.715332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.631 [2024-12-14 03:17:26.721736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.631 [2024-12-14 03:17:26.721798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.632 [2024-12-14 03:17:26.721815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.632 [2024-12-14 03:17:26.726811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.632 [2024-12-14 03:17:26.726861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.632 [2024-12-14 03:17:26.726879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.632 [2024-12-14 03:17:26.731523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.632 [2024-12-14 03:17:26.731624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.632 [2024-12-14 03:17:26.731643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.632 [2024-12-14 03:17:26.737658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.632 [2024-12-14 03:17:26.737725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.632 [2024-12-14 03:17:26.737743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.632 [2024-12-14 03:17:26.743575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.632 [2024-12-14 03:17:26.743640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.632 [2024-12-14 03:17:26.743658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.632 [2024-12-14 03:17:26.749266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.632 [2024-12-14 03:17:26.749328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.632 [2024-12-14 03:17:26.749346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.632 [2024-12-14 03:17:26.754049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.632 [2024-12-14 03:17:26.754117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.632 [2024-12-14 03:17:26.754135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.632 [2024-12-14 03:17:26.759391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.632 [2024-12-14 03:17:26.759451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.632 [2024-12-14 03:17:26.759469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.892 [2024-12-14 03:17:26.764294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.892 [2024-12-14 03:17:26.764369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.892 [2024-12-14 03:17:26.764387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.892 [2024-12-14 03:17:26.769246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.892 [2024-12-14 03:17:26.769308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.892 [2024-12-14 03:17:26.769331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.892 [2024-12-14 03:17:26.774729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.892 [2024-12-14 03:17:26.774857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.892 [2024-12-14 03:17:26.774876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.892 [2024-12-14 03:17:26.779909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.892 [2024-12-14 03:17:26.780006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.892 [2024-12-14 03:17:26.780025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.892 [2024-12-14 03:17:26.784809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.892 [2024-12-14 03:17:26.784883] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.892 [2024-12-14 03:17:26.784902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.892 [2024-12-14 03:17:26.789585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.892 [2024-12-14 03:17:26.789656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.892 [2024-12-14 03:17:26.789673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.892 [2024-12-14 03:17:26.793831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.892 [2024-12-14 03:17:26.794051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.892 [2024-12-14 03:17:26.794070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.892 [2024-12-14 03:17:26.798446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.892 [2024-12-14 03:17:26.798700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.892 [2024-12-14 03:17:26.798723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.892 [2024-12-14 03:17:26.803318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.892 [2024-12-14 03:17:26.803567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.892 [2024-12-14 03:17:26.803586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.892 [2024-12-14 03:17:26.808227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.892 [2024-12-14 03:17:26.808476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.892 [2024-12-14 03:17:26.808496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.892 [2024-12-14 03:17:26.812900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.892 [2024-12-14 03:17:26.813134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.892 [2024-12-14 03:17:26.813153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.892 [2024-12-14 03:17:26.817817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.892 [2024-12-14 03:17:26.818060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.892 [2024-12-14 03:17:26.818079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.892 [2024-12-14 03:17:26.822485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.892 [2024-12-14 03:17:26.822704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.892 [2024-12-14 03:17:26.822723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.892 [2024-12-14 03:17:26.827161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.827407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.827426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.831939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.832176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.832195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.836961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.837181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.837200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.841872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.842107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.842126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.846266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.846494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.846513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.850605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 
03:17:26.850852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.850871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.854735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.854979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.854997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.858816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.859068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.859086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.863021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.863261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.863280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.867474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.867709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.867728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.871883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.872119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.872138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.876166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.876410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.876428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.880626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 
00:36:11.893 [2024-12-14 03:17:26.880877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.880896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.884951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.885200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.885218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.889055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.889297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.889320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.893267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.893508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.893527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.897685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.897929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.897948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.902663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.902900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.902918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.907837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.908073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.908092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.912733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.912965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.912985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.917384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.917617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.917639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.922038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.922282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.922301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.926739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.926985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.927003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.931336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.931570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.931589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.935683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.935919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.935938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.939971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.940215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.940234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.943963] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.944182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.944201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.947925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.893 [2024-12-14 03:17:26.948170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.893 [2024-12-14 03:17:26.948189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.893 [2024-12-14 03:17:26.951922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:26.952166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:26.952185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:26.955929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:26.956181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:26.956199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:26.959856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:26.960102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:26.960120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:26.963746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:26.963992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:26.964011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:26.967694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:26.967936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:26.967955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:26.971623] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:26.971873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:26.971891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:26.975521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:26.975769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:26.975788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:26.979451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:26.979701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:26.979721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:26.983369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:26.983618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:26.983638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:26.987511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:26.987776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:26.987796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:26.992131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:26.992386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:26.992405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:26.996106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:26.996358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:26.996377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.894 
[2024-12-14 03:17:27.000013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:27.000278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:27.000298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:27.003934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:27.004148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:27.004168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:27.007869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:27.008090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:27.008108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:27.012043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:27.012261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:27.012280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:27.017061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:27.017256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:27.017274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.894 [2024-12-14 03:17:27.021650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:11.894 [2024-12-14 03:17:27.021852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.894 [2024-12-14 03:17:27.021871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.155 [2024-12-14 03:17:27.025896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.155 [2024-12-14 03:17:27.026084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.155 [2024-12-14 03:17:27.026105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:36:12.155 [2024-12-14 03:17:27.030164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.155 [2024-12-14 03:17:27.030381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.155 [2024-12-14 03:17:27.030399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.155 [2024-12-14 03:17:27.034280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.155 [2024-12-14 03:17:27.034500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.155 [2024-12-14 03:17:27.034519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.155 [2024-12-14 03:17:27.038205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.155 [2024-12-14 03:17:27.038412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.155 [2024-12-14 03:17:27.038430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.155 [2024-12-14 03:17:27.042144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.155 [2024-12-14 03:17:27.042356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.155 [2024-12-14 03:17:27.042374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.155 [2024-12-14 03:17:27.046058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.155 [2024-12-14 03:17:27.046277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.155 [2024-12-14 03:17:27.046295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.155 [2024-12-14 03:17:27.049647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.155 [2024-12-14 03:17:27.049856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.155 [2024-12-14 03:17:27.049874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.155 [2024-12-14 03:17:27.053192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.155 [2024-12-14 03:17:27.053394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.155 [2024-12-14 03:17:27.053413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:12.155 [2024-12-14 03:17:27.056838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.155 [2024-12-14 03:17:27.057041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.155 [2024-12-14 03:17:27.057059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-message sequence (tcp.c:2241:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8, nvme_io_qpair_print_command *NOTICE*: WRITE, spdk_nvme_print_completion *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for further qid:1 len:32 WRITE commands at varying LBAs from 03:17:27.060 through 03:17:27.487 ...]
00:36:12.418 7186.00 IOPS, 898.25 MiB/s [2024-12-14T02:17:27.551Z]
[... the digest-error/WRITE/TRANSIENT TRANSPORT ERROR sequence continues from 03:17:27.492 through 03:17:27.642 ...]
00:36:12.679 [2024-12-14 03:17:27.648342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.648495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0
nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.648514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.654445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.654650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.654670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.660863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.661086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.661106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.666973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.667157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.667177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.673349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.673646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.673666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.679933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.680167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.680187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.685667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.685880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.685900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.690954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.691238] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.691258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.695654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.695863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.695883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.699360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.699527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.699547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.702981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.703152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.703176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.706667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.706835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.706855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.710247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.710422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.710441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.713849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.714020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.714039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.717484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.717657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.717677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.721027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.721191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.721210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.724575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.724742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.724762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.728118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.728291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.728311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.731656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.731825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.731844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.735192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.735367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.735386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.738800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.738972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.738992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.742371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 
03:17:27.742537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.742556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.745906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.746072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.746092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.749469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.749637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.749657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.753046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.753216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.679 [2024-12-14 03:17:27.753235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.679 [2024-12-14 03:17:27.756857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.679 [2024-12-14 03:17:27.757038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.680 [2024-12-14 03:17:27.757058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.680 [2024-12-14 03:17:27.761792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.680 [2024-12-14 03:17:27.761964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.680 [2024-12-14 03:17:27.761984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.680 [2024-12-14 03:17:27.766146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.680 [2024-12-14 03:17:27.766321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.680 [2024-12-14 03:17:27.766341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.680 [2024-12-14 03:17:27.770063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with 
pdu=0x200016eff3c8 00:36:12.680 [2024-12-14 03:17:27.770227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.680 [2024-12-14 03:17:27.770246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.680 [2024-12-14 03:17:27.773919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.680 [2024-12-14 03:17:27.774082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.680 [2024-12-14 03:17:27.774102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.680 [2024-12-14 03:17:27.777755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.680 [2024-12-14 03:17:27.777922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.680 [2024-12-14 03:17:27.777941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.680 [2024-12-14 03:17:27.781771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.680 [2024-12-14 03:17:27.781956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.680 [2024-12-14 03:17:27.781976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.680 [2024-12-14 03:17:27.786407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.680 [2024-12-14 03:17:27.786573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.680 [2024-12-14 03:17:27.786594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.680 [2024-12-14 03:17:27.790670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.680 [2024-12-14 03:17:27.790853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.680 [2024-12-14 03:17:27.790873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.680 [2024-12-14 03:17:27.796154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.680 [2024-12-14 03:17:27.796387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.680 [2024-12-14 03:17:27.796407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.680 [2024-12-14 03:17:27.802103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.680 [2024-12-14 03:17:27.802292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.680 [2024-12-14 03:17:27.802317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.680 [2024-12-14 03:17:27.807246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.680 [2024-12-14 03:17:27.807473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.680 [2024-12-14 03:17:27.807497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.940 [2024-12-14 03:17:27.812896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.940 [2024-12-14 03:17:27.813047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.940 [2024-12-14 03:17:27.813067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.940 [2024-12-14 03:17:27.818872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.940 [2024-12-14 03:17:27.819047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.940 [2024-12-14 03:17:27.819067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.940 [2024-12-14 03:17:27.823298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.940 [2024-12-14 03:17:27.823467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.940 [2024-12-14 03:17:27.823486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.940 [2024-12-14 03:17:27.827441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.940 [2024-12-14 03:17:27.827602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.940 [2024-12-14 03:17:27.827622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.940 [2024-12-14 03:17:27.831362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.940 [2024-12-14 03:17:27.831530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.940 [2024-12-14 03:17:27.831550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.940 [2024-12-14 03:17:27.835152] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.940 [2024-12-14 03:17:27.835310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.940 [2024-12-14 03:17:27.835335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.940 [2024-12-14 03:17:27.838866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.940 [2024-12-14 03:17:27.839021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.940 [2024-12-14 03:17:27.839041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.940 [2024-12-14 03:17:27.842767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.940 [2024-12-14 03:17:27.842925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.940 [2024-12-14 03:17:27.842944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.940 [2024-12-14 03:17:27.846667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.940 [2024-12-14 03:17:27.846822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.940 [2024-12-14 03:17:27.846842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.940 [2024-12-14 03:17:27.850594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.940 [2024-12-14 03:17:27.850753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.940 [2024-12-14 03:17:27.850773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.940 [2024-12-14 03:17:27.854489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.940 [2024-12-14 03:17:27.854654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.940 [2024-12-14 03:17:27.854674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.858163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.858333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.858352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.862032] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.862195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.862215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.866481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.866641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.866661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.871479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.871636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.871655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.875403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.875564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.875583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.879168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.879339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.879359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.882983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.883142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.883162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.886770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.886928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.886948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.941 
[2024-12-14 03:17:27.890669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.890824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.890844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.894308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.894467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.894487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.898101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.898261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.898281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.902378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.902529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.902548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.906690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.906857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.906877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.910580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.910729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.910748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.914515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.914673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.914695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.918429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.918586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.918605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.922332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.922527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.922548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.925961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.926117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.926137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.929881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.930029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.930048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.934956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.935108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.935127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.939220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.939391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.939410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.943132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.943279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.943299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.946907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.947074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.947094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.950644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.950808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.950827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.954370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.954530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.954549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.957960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.958111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.958130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.961697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.961860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.941 [2024-12-14 03:17:27.961880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.941 [2024-12-14 03:17:27.966001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.941 [2024-12-14 03:17:27.966148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:27.966168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:27.970473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:27.970639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:27.970659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:27.974289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:27.974457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:27.974476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:27.978093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:27.978245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:27.978264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:27.982215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:27.982389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:27.982408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:27.986051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:27.986199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:27.986218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:27.989822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:27.989988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:27.990007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:27.993553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:27.993704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:27.993723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:27.998172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:27.998355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:27.998374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:28.002374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:28.002566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:28.002586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:28.007135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:28.007415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:28.007435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:28.012702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:28.012871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:28.012890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:28.018649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:28.018863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:28.018884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:28.025630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:28.025877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:28.025899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:28.031602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:28.031826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:28.031845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:28.038353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:28.038643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 
03:17:28.038663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:28.045366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:28.045617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:28.045637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:28.051613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:28.051839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:28.051859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:28.057893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:28.058041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:28.058061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:28.064839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:28.065036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:28.065056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.942 [2024-12-14 03:17:28.071016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:12.942 [2024-12-14 03:17:28.071308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.942 [2024-12-14 03:17:28.071334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.076620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.076809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.076828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.082804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.082952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:13.203 [2024-12-14 03:17:28.082972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.088938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.089061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.089080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.095300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.095454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.095473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.100078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.100146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.100164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.103813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.103878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.103896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.107429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.107481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.107499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.111015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.111067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.111085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.114624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.114693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.114711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.118256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.118325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.118343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.122060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.122131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.122148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.126570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.126636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.126654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.131030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.131092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.131111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.134959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.135013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.135031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.138959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.139042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.139060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.142678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.142730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.142747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.147373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.147486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.147505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.152423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.152539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.152558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.158411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.158521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.158543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.164554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.164671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.164690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.171133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.171307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.171335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.203 [2024-12-14 03:17:28.177714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.203 [2024-12-14 03:17:28.177825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.203 [2024-12-14 03:17:28.177844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.184278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.184436] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.184455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.190870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.190993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.191012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.197353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.197435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.197454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.204163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.204345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.204363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.209898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.209951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.209969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.214812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.214929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.214948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.220934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.221020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.221039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.226625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.226698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.226717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.231796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.231884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.231903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.235962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.236025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.236043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.240390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.240497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.240515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.245413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.245571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.245590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.250720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.250876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.250896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.256091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.256256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.256275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.261169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 
03:17:28.261334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.261353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.266516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.266672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.266692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.271810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.271935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.271954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.277324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.277480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.277498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.282489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.282684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.282702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.287718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.287891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.287909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.292884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.292967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.292985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.298287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 
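Each repeated pair of NOTICE lines above records one data-digest failure as seen from the host: tcp.c flags a data digest (CRC32C) mismatch on the data PDU for a WRITE, and nvme_qpair.c then prints that command together with its completion, which carries the status COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal way to tally these completions from a saved copy of this console output (digest_error.log is a hypothetical file name, not something the harness writes):
  # count transient-transport-error completions in a captured copy of this log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' digest_error.log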
00:36:13.204 [2024-12-14 03:17:28.298424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.298442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.303721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.303832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.303854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.309139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.309291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.309310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.314756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.314917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.314935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.318891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.318983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.319001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.322847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.322977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.204 [2024-12-14 03:17:28.322995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.204 [2024-12-14 03:17:28.326953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.204 [2024-12-14 03:17:28.327096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.205 [2024-12-14 03:17:28.327115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.205 [2024-12-14 03:17:28.331021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.205 [2024-12-14 03:17:28.331128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.205 [2024-12-14 03:17:28.331147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.335060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.335154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.335174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.339080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.339216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.339235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.342996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.343160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.343179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.346897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.347030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.347049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.350678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.350814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.350833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.354509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.354590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.354608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.358253] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.358384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.358403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.363109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.363240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.363259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.367918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.368030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.368049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.372029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.372159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.372177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.376086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.376207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.376226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.380081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.380213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.380232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.384009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.384101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.384120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.387996] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.388088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.388107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.391910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.392036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.392054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.395840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.395923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.395943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.399726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.399837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.399856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.404477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.404684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.404702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.409677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.409829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.409848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.413691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.413817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.413839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.465 
[2024-12-14 03:17:28.417627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.417759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.417778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.422035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.422154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.422173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.425971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.426057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.426076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.430076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.430213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.430232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.435077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.435245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.435264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.440794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.440874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.440893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.465 [2024-12-14 03:17:28.446714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.465 [2024-12-14 03:17:28.446917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.465 [2024-12-14 03:17:28.446936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:36:13.466 [2024-12-14 03:17:28.453191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.466 [2024-12-14 03:17:28.453290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.466 [2024-12-14 03:17:28.453309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.466 [2024-12-14 03:17:28.458156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.466 [2024-12-14 03:17:28.458254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.466 [2024-12-14 03:17:28.458274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.466 [2024-12-14 03:17:28.461882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.466 [2024-12-14 03:17:28.461964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.466 [2024-12-14 03:17:28.461983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.466 [2024-12-14 03:17:28.465594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.466 [2024-12-14 03:17:28.465680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.466 [2024-12-14 03:17:28.465699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.466 [2024-12-14 03:17:28.469235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.466 [2024-12-14 03:17:28.469318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.466 [2024-12-14 03:17:28.469336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.466 [2024-12-14 03:17:28.472890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.466 [2024-12-14 03:17:28.472973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.466 [2024-12-14 03:17:28.472991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.466 [2024-12-14 03:17:28.476505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.466 [2024-12-14 03:17:28.476587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.466 [2024-12-14 03:17:28.476605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.466 [2024-12-14 03:17:28.480118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.466 [2024-12-14 03:17:28.480197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.466 [2024-12-14 03:17:28.480216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.466 [2024-12-14 03:17:28.483807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.466 [2024-12-14 03:17:28.483894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.466 [2024-12-14 03:17:28.483912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.466 [2024-12-14 03:17:28.487697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.466 [2024-12-14 03:17:28.487778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.466 [2024-12-14 03:17:28.487797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.466 [2024-12-14 03:17:28.492432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10162a0) with pdu=0x200016eff3c8 00:36:13.466 [2024-12-14 03:17:28.492510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.466 [2024-12-14 03:17:28.492529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.466 6932.00 IOPS, 866.50 MiB/s 00:36:13.466 Latency(us) 00:36:13.466 [2024-12-14T02:17:28.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:13.466 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:13.466 nvme0n1 : 2.00 6929.11 866.14 0.00 0.00 2304.99 1654.00 10236.10 00:36:13.466 [2024-12-14T02:17:28.599Z] =================================================================================================================== 00:36:13.466 [2024-12-14T02:17:28.599Z] Total : 6929.11 866.14 0.00 0.00 2304.99 1654.00 10236.10 00:36:13.466 { 00:36:13.466 "results": [ 00:36:13.466 { 00:36:13.466 "job": "nvme0n1", 00:36:13.466 "core_mask": "0x2", 00:36:13.466 "workload": "randwrite", 00:36:13.466 "status": "finished", 00:36:13.466 "queue_depth": 16, 00:36:13.466 "io_size": 131072, 00:36:13.466 "runtime": 2.003142, 00:36:13.466 "iops": 6929.114361338337, 00:36:13.466 "mibps": 866.1392951672922, 00:36:13.466 "io_failed": 0, 00:36:13.466 "io_timeout": 0, 00:36:13.466 "avg_latency_us": 2304.9912578564567, 00:36:13.466 "min_latency_us": 1654.0038095238094, 00:36:13.466 "max_latency_us": 10236.099047619047 00:36:13.466 } 00:36:13.466 ], 00:36:13.466 "core_count": 1 00:36:13.466 } 00:36:13.466 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:13.466 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:13.466 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:13.466 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:13.466 | .driver_specific 00:36:13.466 | .nvme_error 00:36:13.466 | .status_code 00:36:13.466 | .command_transient_transport_error' 00:36:13.725 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 448 > 0 )) 00:36:13.725 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 386374 00:36:13.725 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 386374 ']' 00:36:13.725 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 386374 00:36:13.725 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:13.725 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:13.725 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386374 00:36:13.725 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:13.725 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:13.725 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386374' 00:36:13.725 killing process with pid 386374 00:36:13.725 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 386374 00:36:13.725 Received shutdown signal, test time was about 2.000000 seconds 00:36:13.725 00:36:13.725 Latency(us) 00:36:13.725 [2024-12-14T02:17:28.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:13.725 [2024-12-14T02:17:28.858Z] =================================================================================================================== 00:36:13.725 [2024-12-14T02:17:28.858Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:13.725 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 386374 00:36:13.984 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 386194 00:36:13.984 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 386194 ']' 00:36:13.984 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 386194 00:36:13.984 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:13.984 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:13.984 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386194 00:36:13.984 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:13.984 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:36:13.984 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386194' 00:36:13.984 killing process with pid 386194 00:36:13.984 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 386194 00:36:13.984 03:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 386194 00:36:14.243 00:36:14.243 real 0m13.678s 00:36:14.243 user 0m26.096s 00:36:14.243 sys 0m4.609s 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:14.243 ************************************ 00:36:14.243 END TEST nvmf_digest_error 00:36:14.243 ************************************ 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:14.243 rmmod nvme_tcp 00:36:14.243 rmmod nvme_fabrics 00:36:14.243 rmmod nvme_keyring 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 386194 ']' 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 386194 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 386194 ']' 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 386194 00:36:14.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (386194) - No such process 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 386194 is not found' 00:36:14.243 Process with pid 386194 is not found 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:14.243 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:14.244 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:14.244 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:36:14.244 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:36:14.244 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:14.244 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:36:14.244 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:14.244 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:14.244 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:14.244 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:14.244 03:17:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.776 03:17:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:16.776 00:36:16.776 real 0m35.838s 00:36:16.776 user 0m54.397s 00:36:16.777 sys 0m13.671s 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.777 ************************************ 00:36:16.777 END TEST nvmf_digest 00:36:16.777 ************************************ 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.777 ************************************ 00:36:16.777 START TEST nvmf_bdevperf 00:36:16.777 ************************************ 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:16.777 * Looking for test storage... 
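In the digest_error teardown traced above, get_transient_errcount queries the bdevperf RPC socket for per-bdev I/O statistics and pulls the transient-transport-error counter out of the returned JSON; the (( 448 > 0 )) check is what lets the test pass. A condensed sketch of that query, reusing the socket path and jq filter shown in the trace (the rpc.py path is shortened here for readability):
  # count of completions that failed with COMMAND TRANSIENT TRANSPORT ERROR, per the bdevperf iostat JSON
  errs=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
         | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 )) && echo "nvme0n1 saw $errs transient transport errors"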
00:36:16.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:16.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.777 --rc genhtml_branch_coverage=1 00:36:16.777 --rc genhtml_function_coverage=1 00:36:16.777 --rc genhtml_legend=1 00:36:16.777 --rc geninfo_all_blocks=1 00:36:16.777 --rc geninfo_unexecuted_blocks=1 00:36:16.777 00:36:16.777 ' 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:16.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.777 --rc genhtml_branch_coverage=1 00:36:16.777 --rc genhtml_function_coverage=1 00:36:16.777 --rc genhtml_legend=1 00:36:16.777 --rc geninfo_all_blocks=1 00:36:16.777 --rc geninfo_unexecuted_blocks=1 00:36:16.777 00:36:16.777 ' 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:16.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.777 --rc genhtml_branch_coverage=1 00:36:16.777 --rc genhtml_function_coverage=1 00:36:16.777 --rc genhtml_legend=1 00:36:16.777 --rc geninfo_all_blocks=1 00:36:16.777 --rc geninfo_unexecuted_blocks=1 00:36:16.777 00:36:16.777 ' 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:16.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.777 --rc genhtml_branch_coverage=1 00:36:16.777 --rc genhtml_function_coverage=1 00:36:16.777 --rc genhtml_legend=1 00:36:16.777 --rc geninfo_all_blocks=1 00:36:16.777 --rc geninfo_unexecuted_blocks=1 00:36:16.777 00:36:16.777 ' 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:16.777 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:16.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:36:16.778 03:17:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:22.054 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:22.054 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:22.054 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
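The gather_supported_nvmf_pci_devs helper traced here whitelists the PCI device IDs of Intel E810/X722 and Mellanox ConnectX NICs and then resolves each matching PCI address to its kernel net device through /sys/bus/pci/devices/<addr>/net. A minimal stand-alone sketch of that lookup for an E810 port (device ID 0x159b), using lspci instead of the harness's own pci_bus_cache, might look like:

    # Sketch only (assumes lspci is installed); the test harness builds a pci_bus_cache instead.
    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "$pci -> ${dev##*/}"
        done
    done

On this node the two matching functions, 0000:af:00.0 and 0000:af:00.1, resolve to cvl_0_0 and cvl_0_1, as the trace below shows.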
00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:22.055 Found net devices under 0000:af:00.0: cvl_0_0 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:22.055 Found net devices under 0000:af:00.1: cvl_0_1 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:22.055 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:22.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:22.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:36:22.314 00:36:22.314 --- 10.0.0.2 ping statistics --- 00:36:22.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.314 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:22.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:22.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:36:22.314 00:36:22.314 --- 10.0.0.1 ping statistics --- 00:36:22.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.314 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:22.314 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.573 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=388665 00:36:22.573 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 388665 00:36:22.573 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:22.573 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 388665 ']' 00:36:22.573 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:22.573 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:22.573 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:22.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:22.573 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:22.573 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.573 [2024-12-14 03:17:37.496365] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:36:22.573 [2024-12-14 03:17:37.496407] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:22.573 [2024-12-14 03:17:37.575490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:22.573 [2024-12-14 03:17:37.597988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:22.573 [2024-12-14 03:17:37.598022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:22.573 [2024-12-14 03:17:37.598029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:22.573 [2024-12-14 03:17:37.598035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:22.573 [2024-12-14 03:17:37.598040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:22.573 [2024-12-14 03:17:37.599218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:22.573 [2024-12-14 03:17:37.599303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:22.573 [2024-12-14 03:17:37.599305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:22.573 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:22.573 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:22.573 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:22.573 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:22.573 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.833 [2024-12-14 03:17:37.729569] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.833 Malloc0 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
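Condensed, the nvmf_tcp_init sequence traced above gives the target port its own network namespace while the initiator port stays in the default namespace, which is why the nvmf_tgt above is launched through ip netns exec cvl_0_0_ns_spdk. A sketch of the same plumbing, with the interface names and addresses used on this node:

    # Target port cvl_0_0 lives in namespace cvl_0_0_ns_spdk at 10.0.0.2;
    # initiator port cvl_0_1 stays in the default namespace at 10.0.0.1.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                  # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability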
00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.833 [2024-12-14 03:17:37.795559] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:22.833 { 00:36:22.833 "params": { 00:36:22.833 "name": "Nvme$subsystem", 00:36:22.833 "trtype": "$TEST_TRANSPORT", 00:36:22.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:22.833 "adrfam": "ipv4", 00:36:22.833 "trsvcid": "$NVMF_PORT", 00:36:22.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:22.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:22.833 "hdgst": ${hdgst:-false}, 00:36:22.833 "ddgst": ${ddgst:-false} 00:36:22.833 }, 00:36:22.833 "method": "bdev_nvme_attach_controller" 00:36:22.833 } 00:36:22.833 EOF 00:36:22.833 )") 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:22.833 03:17:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:22.833 "params": { 00:36:22.833 "name": "Nvme1", 00:36:22.833 "trtype": "tcp", 00:36:22.833 "traddr": "10.0.0.2", 00:36:22.833 "adrfam": "ipv4", 00:36:22.833 "trsvcid": "4420", 00:36:22.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:22.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:22.833 "hdgst": false, 00:36:22.833 "ddgst": false 00:36:22.833 }, 00:36:22.833 "method": "bdev_nvme_attach_controller" 00:36:22.833 }' 00:36:22.833 [2024-12-14 03:17:37.846241] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:36:22.833 [2024-12-14 03:17:37.846282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388690 ] 00:36:22.833 [2024-12-14 03:17:37.920280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.833 [2024-12-14 03:17:37.942218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.092 Running I/O for 1 seconds... 00:36:24.029 11291.00 IOPS, 44.11 MiB/s 00:36:24.029 Latency(us) 00:36:24.029 [2024-12-14T02:17:39.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.029 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:24.029 Verification LBA range: start 0x0 length 0x4000 00:36:24.029 Nvme1n1 : 1.01 11287.76 44.09 0.00 0.00 11298.30 2231.34 14979.66 00:36:24.029 [2024-12-14T02:17:39.162Z] =================================================================================================================== 00:36:24.029 [2024-12-14T02:17:39.162Z] Total : 11287.76 44.09 0.00 0.00 11298.30 2231.34 14979.66 00:36:24.288 03:17:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=388718 00:36:24.288 03:17:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:24.288 03:17:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:24.288 03:17:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:24.288 03:17:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:24.288 03:17:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:24.288 03:17:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:24.288 03:17:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:24.288 { 00:36:24.288 "params": { 00:36:24.288 "name": "Nvme$subsystem", 00:36:24.288 "trtype": "$TEST_TRANSPORT", 00:36:24.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:24.288 "adrfam": "ipv4", 00:36:24.288 "trsvcid": "$NVMF_PORT", 00:36:24.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:24.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:24.288 "hdgst": ${hdgst:-false}, 00:36:24.288 "ddgst": ${ddgst:-false} 00:36:24.288 }, 00:36:24.288 "method": "bdev_nvme_attach_controller" 00:36:24.288 } 00:36:24.288 EOF 00:36:24.288 )") 00:36:24.288 03:17:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:24.288 03:17:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
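The rpc_cmd calls traced above (create the TCP transport, a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 10.0.0.2:4420) are the entire target-side configuration for this test. Outside the harness, the same bring-up can be driven with scripts/rpc.py against the default RPC socket /var/tmp/spdk.sock that the nvmf_tgt above listens on; a sketch:

    # Sketch of the equivalent bring-up via scripts/rpc.py (same arguments as the rpc_cmd trace above).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each bdevperf run then attaches to that subsystem with the bdev_nvme_attach_controller parameters printed above and drives the verify workload (-q 128 -o 4096) against it.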
00:36:24.288 03:17:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:24.288 03:17:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:24.288 "params": { 00:36:24.288 "name": "Nvme1", 00:36:24.288 "trtype": "tcp", 00:36:24.288 "traddr": "10.0.0.2", 00:36:24.288 "adrfam": "ipv4", 00:36:24.288 "trsvcid": "4420", 00:36:24.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:24.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:24.288 "hdgst": false, 00:36:24.288 "ddgst": false 00:36:24.288 }, 00:36:24.288 "method": "bdev_nvme_attach_controller" 00:36:24.288 }' 00:36:24.288 [2024-12-14 03:17:39.308203] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:24.288 [2024-12-14 03:17:39.308257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388718 ] 00:36:24.288 [2024-12-14 03:17:39.385308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.288 [2024-12-14 03:17:39.405128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.856 Running I/O for 15 seconds... 00:36:26.729 11228.00 IOPS, 43.86 MiB/s [2024-12-14T02:17:42.432Z] 11237.50 IOPS, 43.90 MiB/s [2024-12-14T02:17:42.432Z] 03:17:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 388665 00:36:27.299 03:17:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:27.299 [2024-12-14 03:17:42.275623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.299 [2024-12-14 03:17:42.275658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 
03:17:42.275758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.275992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.275999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.300 [2024-12-14 03:17:42.276087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.300 [2024-12-14 03:17:42.276303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.300 [2024-12-14 03:17:42.276309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 
[2024-12-14 03:17:42.276508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276657] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276809] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.276989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.276996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.277004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.277011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.301 [2024-12-14 03:17:42.277019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.301 [2024-12-14 03:17:42.277025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 
[2024-12-14 03:17:42.277261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277413] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.302 [2024-12-14 03:17:42.277605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.302 [2024-12-14 03:17:42.277620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.302 [2024-12-14 03:17:42.277628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.303 [2024-12-14 03:17:42.277634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.303 [2024-12-14 03:17:42.277642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.303 [2024-12-14 03:17:42.277649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.303 [2024-12-14 03:17:42.277657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.303 [2024-12-14 03:17:42.277664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.303 [2024-12-14 03:17:42.277672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.303 [2024-12-14 03:17:42.277678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.303 [2024-12-14 03:17:42.277686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.303 [2024-12-14 03:17:42.277693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.303 [2024-12-14 03:17:42.277707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.303 [2024-12-14 03:17:42.277714] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.303 [2024-12-14 03:17:42.277723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2665920 is same with the state(6) to be set 00:36:27.303 [2024-12-14 03:17:42.277732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:27.303 [2024-12-14 03:17:42.277738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:27.303 [2024-12-14 03:17:42.277745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102568 len:8 PRP1 0x0 PRP2 0x0 00:36:27.303 [2024-12-14 03:17:42.277751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.303 [2024-12-14 03:17:42.280544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.303 [2024-12-14 03:17:42.280599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.303 [2024-12-14 03:17:42.281193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.303 [2024-12-14 03:17:42.281209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.303 [2024-12-14 03:17:42.281217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.303 [2024-12-14 03:17:42.281399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.303 [2024-12-14 03:17:42.281575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.303 [2024-12-14 03:17:42.281583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.303 [2024-12-14 03:17:42.281591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.303 [2024-12-14 03:17:42.281599] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
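Every queued READ/WRITE in the dump above is completed with the status pair "(00/08)", i.e. status code type 0x0 (generic) and status code 0x08 (command aborted due to SQ deletion), which is the expected completion while the submission queue is torn down during the controller reset. A minimal decoding sketch in C (illustrative only, not taken from the log or the SPDK sources; the struct and field names are local):

/* Minimal sketch (not part of the test log): decode the "(00/08)" pair that
 * spdk_nvme_print_completion reports as "ABORTED - SQ DELETION (00/08)".
 * The two hex numbers are the Status Code Type and Status Code taken from
 * the 16-bit status field of an NVMe completion (CQE dword 3, bits 31:16).
 * Field layout follows the NVMe base specification; names here are local. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    uint8_t p;    /* phase tag        (bit 0)      */
    uint8_t sc;   /* status code      (bits 8:1)   */
    uint8_t sct;  /* status code type (bits 11:9)  */
    uint8_t m;    /* more             (bit 14)     */
    uint8_t dnr;  /* do not retry     (bit 15)     */
};

static struct nvme_status decode_status(uint16_t raw)
{
    struct nvme_status s = {
        .p   = raw & 0x1,
        .sc  = (raw >> 1) & 0xff,
        .sct = (raw >> 9) & 0x7,
        .m   = (raw >> 14) & 0x1,
        .dnr = (raw >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* SCT 0x0 (generic), SC 0x08 (command aborted due to SQ deletion),
     * p/m/dnr all zero -- the combination printed throughout this log. */
    uint16_t raw = (0x0 << 9) | (0x08 << 1);
    struct nvme_status s = decode_status(raw);

    printf("sct:%#x sc:%#x p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}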
00:36:27.303 [2024-12-14 03:17:42.293815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.303 [2024-12-14 03:17:42.294233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.303 [2024-12-14 03:17:42.294250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.303 [2024-12-14 03:17:42.294259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.303 [2024-12-14 03:17:42.294441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.303 [2024-12-14 03:17:42.294615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.303 [2024-12-14 03:17:42.294623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.303 [2024-12-14 03:17:42.294631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.303 [2024-12-14 03:17:42.294637] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.303 [2024-12-14 03:17:42.306709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.303 [2024-12-14 03:17:42.307142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.303 [2024-12-14 03:17:42.307159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.303 [2024-12-14 03:17:42.307166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.303 [2024-12-14 03:17:42.307343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.303 [2024-12-14 03:17:42.307516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.303 [2024-12-14 03:17:42.307524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.303 [2024-12-14 03:17:42.307531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.303 [2024-12-14 03:17:42.307537] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.303 [2024-12-14 03:17:42.319610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.303 [2024-12-14 03:17:42.320055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.303 [2024-12-14 03:17:42.320072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.303 [2024-12-14 03:17:42.320079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.303 [2024-12-14 03:17:42.320248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.303 [2024-12-14 03:17:42.320423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.303 [2024-12-14 03:17:42.320432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.303 [2024-12-14 03:17:42.320438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.303 [2024-12-14 03:17:42.320444] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.303 [2024-12-14 03:17:42.332431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.303 [2024-12-14 03:17:42.332775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.303 [2024-12-14 03:17:42.332821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.303 [2024-12-14 03:17:42.332845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.303 [2024-12-14 03:17:42.333299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.303 [2024-12-14 03:17:42.333673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.303 [2024-12-14 03:17:42.333692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.303 [2024-12-14 03:17:42.333707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.303 [2024-12-14 03:17:42.333720] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.303 [2024-12-14 03:17:42.347191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.303 [2024-12-14 03:17:42.347719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.303 [2024-12-14 03:17:42.347742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.303 [2024-12-14 03:17:42.347752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.303 [2024-12-14 03:17:42.348006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.303 [2024-12-14 03:17:42.348261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.303 [2024-12-14 03:17:42.348273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.303 [2024-12-14 03:17:42.348286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.303 [2024-12-14 03:17:42.348296] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.303 [2024-12-14 03:17:42.360171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.303 [2024-12-14 03:17:42.360617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.303 [2024-12-14 03:17:42.360663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.303 [2024-12-14 03:17:42.360686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.303 [2024-12-14 03:17:42.361281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.303 [2024-12-14 03:17:42.361456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.303 [2024-12-14 03:17:42.361465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.303 [2024-12-14 03:17:42.361471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.303 [2024-12-14 03:17:42.361478] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.303 [2024-12-14 03:17:42.372999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.303 [2024-12-14 03:17:42.373439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.303 [2024-12-14 03:17:42.373455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.303 [2024-12-14 03:17:42.373462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.303 [2024-12-14 03:17:42.373620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.303 [2024-12-14 03:17:42.373778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.303 [2024-12-14 03:17:42.373786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.303 [2024-12-14 03:17:42.373792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.303 [2024-12-14 03:17:42.373798] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.303 [2024-12-14 03:17:42.385831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.304 [2024-12-14 03:17:42.386282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.304 [2024-12-14 03:17:42.386299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.304 [2024-12-14 03:17:42.386306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.304 [2024-12-14 03:17:42.386500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.304 [2024-12-14 03:17:42.386673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.304 [2024-12-14 03:17:42.386681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.304 [2024-12-14 03:17:42.386687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.304 [2024-12-14 03:17:42.386694] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.304 [2024-12-14 03:17:42.398618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.304 [2024-12-14 03:17:42.399053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.304 [2024-12-14 03:17:42.399099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.304 [2024-12-14 03:17:42.399123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.304 [2024-12-14 03:17:42.399586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.304 [2024-12-14 03:17:42.399755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.304 [2024-12-14 03:17:42.399763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.304 [2024-12-14 03:17:42.399769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.304 [2024-12-14 03:17:42.399776] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.304 [2024-12-14 03:17:42.411358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.304 [2024-12-14 03:17:42.411685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.304 [2024-12-14 03:17:42.411701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.304 [2024-12-14 03:17:42.411707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.304 [2024-12-14 03:17:42.411866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.304 [2024-12-14 03:17:42.412025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.304 [2024-12-14 03:17:42.412032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.304 [2024-12-14 03:17:42.412038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.304 [2024-12-14 03:17:42.412044] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.304 [2024-12-14 03:17:42.424132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.304 [2024-12-14 03:17:42.424574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.304 [2024-12-14 03:17:42.424592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.304 [2024-12-14 03:17:42.424599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.304 [2024-12-14 03:17:42.424772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.304 [2024-12-14 03:17:42.424945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.304 [2024-12-14 03:17:42.424953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.304 [2024-12-14 03:17:42.424960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.304 [2024-12-14 03:17:42.424966] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.564 [2024-12-14 03:17:42.436885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.564 [2024-12-14 03:17:42.437309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.564 [2024-12-14 03:17:42.437331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.564 [2024-12-14 03:17:42.437357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.564 [2024-12-14 03:17:42.437525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.564 [2024-12-14 03:17:42.437694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.564 [2024-12-14 03:17:42.437702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.565 [2024-12-14 03:17:42.437709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.565 [2024-12-14 03:17:42.437715] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.565 [2024-12-14 03:17:42.449723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.565 [2024-12-14 03:17:42.450154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.565 [2024-12-14 03:17:42.450198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.565 [2024-12-14 03:17:42.450222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.565 [2024-12-14 03:17:42.450705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.565 [2024-12-14 03:17:42.450874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.565 [2024-12-14 03:17:42.450883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.565 [2024-12-14 03:17:42.450889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.565 [2024-12-14 03:17:42.450895] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.565 [2024-12-14 03:17:42.462564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.565 [2024-12-14 03:17:42.462979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.565 [2024-12-14 03:17:42.462996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.565 [2024-12-14 03:17:42.463002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.565 [2024-12-14 03:17:42.463161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.565 [2024-12-14 03:17:42.463326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.565 [2024-12-14 03:17:42.463334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.565 [2024-12-14 03:17:42.463357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.565 [2024-12-14 03:17:42.463364] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
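Each reconnect attempt above fails inside posix_sock_create with errno 111, which on Linux is ECONNREFUSED: the target at 10.0.0.2:4420 is not accepting TCP connections at that point in the test. A self-contained sketch that reproduces the same errno against a port with no listener (the address and port below are placeholders, not the test target):

/* Minimal sketch (assumption: Linux, where ECONNREFUSED == 111): reproduce
 * the "connect() failed, errno = 111" that posix_sock_create logs when no
 * listener answers on the target's TCP port. Address/port below are
 * placeholders, not taken from the test environment. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),                   /* NVMe/TCP default port */
    };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* placeholder target */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints errno 111 on Linux. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}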
00:36:27.565 [2024-12-14 03:17:42.475403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.565 [2024-12-14 03:17:42.475816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.565 [2024-12-14 03:17:42.475834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.565 [2024-12-14 03:17:42.475841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.565 [2024-12-14 03:17:42.476000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.565 [2024-12-14 03:17:42.476162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.565 [2024-12-14 03:17:42.476170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.565 [2024-12-14 03:17:42.476176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.565 [2024-12-14 03:17:42.476182] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.565 [2024-12-14 03:17:42.488181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.565 [2024-12-14 03:17:42.488634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.565 [2024-12-14 03:17:42.488651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.565 [2024-12-14 03:17:42.488658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.565 [2024-12-14 03:17:42.488826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.565 [2024-12-14 03:17:42.488994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.565 [2024-12-14 03:17:42.489002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.565 [2024-12-14 03:17:42.489009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.565 [2024-12-14 03:17:42.489015] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.565 [2024-12-14 03:17:42.501051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.565 [2024-12-14 03:17:42.501380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.565 [2024-12-14 03:17:42.501396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.565 [2024-12-14 03:17:42.501404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.565 [2024-12-14 03:17:42.501563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.565 [2024-12-14 03:17:42.501722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.565 [2024-12-14 03:17:42.501729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.565 [2024-12-14 03:17:42.501736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.565 [2024-12-14 03:17:42.501742] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.565 [2024-12-14 03:17:42.513840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.565 [2024-12-14 03:17:42.514263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.565 [2024-12-14 03:17:42.514309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.565 [2024-12-14 03:17:42.514382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.565 [2024-12-14 03:17:42.514834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.565 [2024-12-14 03:17:42.515003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.565 [2024-12-14 03:17:42.515011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.565 [2024-12-14 03:17:42.515021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.565 [2024-12-14 03:17:42.515028] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.565 [2024-12-14 03:17:42.526619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.565 [2024-12-14 03:17:42.527054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.565 [2024-12-14 03:17:42.527071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.565 [2024-12-14 03:17:42.527079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.565 [2024-12-14 03:17:42.527252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.565 [2024-12-14 03:17:42.527461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.565 [2024-12-14 03:17:42.527471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.565 [2024-12-14 03:17:42.527478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.565 [2024-12-14 03:17:42.527484] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.565 [2024-12-14 03:17:42.539694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.565 [2024-12-14 03:17:42.540033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.565 [2024-12-14 03:17:42.540050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.565 [2024-12-14 03:17:42.540058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.565 [2024-12-14 03:17:42.540231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.565 [2024-12-14 03:17:42.540409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.565 [2024-12-14 03:17:42.540418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.565 [2024-12-14 03:17:42.540424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.565 [2024-12-14 03:17:42.540431] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.565 [2024-12-14 03:17:42.552780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.565 [2024-12-14 03:17:42.553230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.565 [2024-12-14 03:17:42.553247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.565 [2024-12-14 03:17:42.553254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.565 [2024-12-14 03:17:42.553426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.565 [2024-12-14 03:17:42.553615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.565 [2024-12-14 03:17:42.553623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.565 [2024-12-14 03:17:42.553630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.565 [2024-12-14 03:17:42.553636] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.565 [2024-12-14 03:17:42.565658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.565 [2024-12-14 03:17:42.566059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.565 [2024-12-14 03:17:42.566075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.565 [2024-12-14 03:17:42.566082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.565 [2024-12-14 03:17:42.566250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.565 [2024-12-14 03:17:42.566439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.565 [2024-12-14 03:17:42.566448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.565 [2024-12-14 03:17:42.566454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.566 [2024-12-14 03:17:42.566461] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.566 [2024-12-14 03:17:42.578404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.566 [2024-12-14 03:17:42.578855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.566 [2024-12-14 03:17:42.578901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.566 [2024-12-14 03:17:42.578924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.566 [2024-12-14 03:17:42.579525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.566 [2024-12-14 03:17:42.580029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.566 [2024-12-14 03:17:42.580037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.566 [2024-12-14 03:17:42.580043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.566 [2024-12-14 03:17:42.580049] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.566 [2024-12-14 03:17:42.591162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.566 [2024-12-14 03:17:42.591610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.566 [2024-12-14 03:17:42.591628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.566 [2024-12-14 03:17:42.591636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.566 [2024-12-14 03:17:42.591805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.566 [2024-12-14 03:17:42.591972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.566 [2024-12-14 03:17:42.591980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.566 [2024-12-14 03:17:42.591987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.566 [2024-12-14 03:17:42.591993] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.566 [2024-12-14 03:17:42.604034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.566 [2024-12-14 03:17:42.604403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.566 [2024-12-14 03:17:42.604449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.566 [2024-12-14 03:17:42.604487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.566 [2024-12-14 03:17:42.604946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.566 [2024-12-14 03:17:42.605114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.566 [2024-12-14 03:17:42.605122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.566 [2024-12-14 03:17:42.605129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.566 [2024-12-14 03:17:42.605135] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.566 [2024-12-14 03:17:42.616873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.566 [2024-12-14 03:17:42.617264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.566 [2024-12-14 03:17:42.617281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.566 [2024-12-14 03:17:42.617287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.566 [2024-12-14 03:17:42.617474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.566 [2024-12-14 03:17:42.617643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.566 [2024-12-14 03:17:42.617651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.566 [2024-12-14 03:17:42.617657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.566 [2024-12-14 03:17:42.617664] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.566 [2024-12-14 03:17:42.629644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.566 [2024-12-14 03:17:42.630094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.566 [2024-12-14 03:17:42.630140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.566 [2024-12-14 03:17:42.630164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.566 [2024-12-14 03:17:42.630765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.566 [2024-12-14 03:17:42.631139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.566 [2024-12-14 03:17:42.631147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.566 [2024-12-14 03:17:42.631154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.566 [2024-12-14 03:17:42.631160] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.566 [2024-12-14 03:17:42.642442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.566 [2024-12-14 03:17:42.642862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.566 [2024-12-14 03:17:42.642908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.566 [2024-12-14 03:17:42.642931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.566 [2024-12-14 03:17:42.643531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.566 [2024-12-14 03:17:42.644128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.566 [2024-12-14 03:17:42.644154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.566 [2024-12-14 03:17:42.644160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.566 [2024-12-14 03:17:42.644167] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.566 [2024-12-14 03:17:42.655712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.566 [2024-12-14 03:17:42.656149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.566 [2024-12-14 03:17:42.656167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.566 [2024-12-14 03:17:42.656175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.566 [2024-12-14 03:17:42.656355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.566 [2024-12-14 03:17:42.656529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.566 [2024-12-14 03:17:42.656537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.566 [2024-12-14 03:17:42.656544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.566 [2024-12-14 03:17:42.656550] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.566 [2024-12-14 03:17:42.668470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.566 [2024-12-14 03:17:42.668801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.566 [2024-12-14 03:17:42.668817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.566 [2024-12-14 03:17:42.668824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.566 [2024-12-14 03:17:42.668983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.566 [2024-12-14 03:17:42.669142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.566 [2024-12-14 03:17:42.669149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.566 [2024-12-14 03:17:42.669155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.566 [2024-12-14 03:17:42.669162] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.566 [2024-12-14 03:17:42.681209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.566 [2024-12-14 03:17:42.681583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.566 [2024-12-14 03:17:42.681630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.566 [2024-12-14 03:17:42.681654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.566 [2024-12-14 03:17:42.682194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.566 [2024-12-14 03:17:42.682374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.566 [2024-12-14 03:17:42.682384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.566 [2024-12-14 03:17:42.682393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.566 [2024-12-14 03:17:42.682400] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.566 9724.67 IOPS, 37.99 MiB/s [2024-12-14T02:17:42.699Z] [2024-12-14 03:17:42.695001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.566 [2024-12-14 03:17:42.695456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.566 [2024-12-14 03:17:42.695473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.566 [2024-12-14 03:17:42.695481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.566 [2024-12-14 03:17:42.695680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.827 [2024-12-14 03:17:42.695853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.827 [2024-12-14 03:17:42.695862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.827 [2024-12-14 03:17:42.695868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.827 [2024-12-14 03:17:42.695875] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
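Note on the interleaved throughput sample above ("9724.67 IOPS, 37.99 MiB/s"): this is the benchmark's periodic progress line, not an error. The two figures are consistent with roughly 4 KiB per I/O, since 9724.67 IOPS x 4096 bytes is about 39.8 MB/s, which is 37.99 MiB/s (equivalently, 9724.67 / 256 is about 37.99). The 4 KiB block size is inferred from that ratio and is not stated in this excerpt.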
00:36:27.827 [2024-12-14 03:17:42.707818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.827 [2024-12-14 03:17:42.708274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.827 [2024-12-14 03:17:42.708331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.827 [2024-12-14 03:17:42.708356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.827 [2024-12-14 03:17:42.708939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.827 [2024-12-14 03:17:42.709336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.827 [2024-12-14 03:17:42.709344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.827 [2024-12-14 03:17:42.709351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.827 [2024-12-14 03:17:42.709357] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.827 [2024-12-14 03:17:42.720643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.827 [2024-12-14 03:17:42.721035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.827 [2024-12-14 03:17:42.721052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.827 [2024-12-14 03:17:42.721059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.827 [2024-12-14 03:17:42.721217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.827 [2024-12-14 03:17:42.721400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.827 [2024-12-14 03:17:42.721409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.827 [2024-12-14 03:17:42.721416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.827 [2024-12-14 03:17:42.721422] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.827 [2024-12-14 03:17:42.733401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.827 [2024-12-14 03:17:42.733848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.827 [2024-12-14 03:17:42.733865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.827 [2024-12-14 03:17:42.733872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.827 [2024-12-14 03:17:42.734040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.827 [2024-12-14 03:17:42.734208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.827 [2024-12-14 03:17:42.734216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.827 [2024-12-14 03:17:42.734222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.827 [2024-12-14 03:17:42.734229] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.827 [2024-12-14 03:17:42.746215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.827 [2024-12-14 03:17:42.746648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.827 [2024-12-14 03:17:42.746665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.827 [2024-12-14 03:17:42.746673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.827 [2024-12-14 03:17:42.746841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.827 [2024-12-14 03:17:42.747009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.827 [2024-12-14 03:17:42.747017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.827 [2024-12-14 03:17:42.747023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.827 [2024-12-14 03:17:42.747029] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.827 [2024-12-14 03:17:42.759127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.827 [2024-12-14 03:17:42.759568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.827 [2024-12-14 03:17:42.759613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.827 [2024-12-14 03:17:42.759637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.827 [2024-12-14 03:17:42.760182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.827 [2024-12-14 03:17:42.760358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.827 [2024-12-14 03:17:42.760366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.827 [2024-12-14 03:17:42.760373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.827 [2024-12-14 03:17:42.760379] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.827 [2024-12-14 03:17:42.771935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.827 [2024-12-14 03:17:42.772349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.827 [2024-12-14 03:17:42.772394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.827 [2024-12-14 03:17:42.772425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.827 [2024-12-14 03:17:42.773009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.827 [2024-12-14 03:17:42.773462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.827 [2024-12-14 03:17:42.773471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.827 [2024-12-14 03:17:42.773477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.827 [2024-12-14 03:17:42.773483] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.827 [2024-12-14 03:17:42.784774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.827 [2024-12-14 03:17:42.785197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.827 [2024-12-14 03:17:42.785214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.827 [2024-12-14 03:17:42.785221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.827 [2024-12-14 03:17:42.785401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.827 [2024-12-14 03:17:42.785574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.827 [2024-12-14 03:17:42.785582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.827 [2024-12-14 03:17:42.785589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.827 [2024-12-14 03:17:42.785595] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.827 [2024-12-14 03:17:42.797757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.827 [2024-12-14 03:17:42.798155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.827 [2024-12-14 03:17:42.798171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.827 [2024-12-14 03:17:42.798179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.827 [2024-12-14 03:17:42.798359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.827 [2024-12-14 03:17:42.798533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.827 [2024-12-14 03:17:42.798552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.827 [2024-12-14 03:17:42.798558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.827 [2024-12-14 03:17:42.798565] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.827 [2024-12-14 03:17:42.810731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.827 [2024-12-14 03:17:42.811110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.827 [2024-12-14 03:17:42.811127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.827 [2024-12-14 03:17:42.811134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.827 [2024-12-14 03:17:42.811302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.827 [2024-12-14 03:17:42.811482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.828 [2024-12-14 03:17:42.811490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.828 [2024-12-14 03:17:42.811496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.828 [2024-12-14 03:17:42.811503] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.828 [2024-12-14 03:17:42.823472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.828 [2024-12-14 03:17:42.823885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.828 [2024-12-14 03:17:42.823930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.828 [2024-12-14 03:17:42.823954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.828 [2024-12-14 03:17:42.824545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.828 [2024-12-14 03:17:42.824714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.828 [2024-12-14 03:17:42.824722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.828 [2024-12-14 03:17:42.824728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.828 [2024-12-14 03:17:42.824734] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.828 [2024-12-14 03:17:42.836424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.828 [2024-12-14 03:17:42.836840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.828 [2024-12-14 03:17:42.836857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.828 [2024-12-14 03:17:42.836864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.828 [2024-12-14 03:17:42.837032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.828 [2024-12-14 03:17:42.837199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.828 [2024-12-14 03:17:42.837207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.828 [2024-12-14 03:17:42.837214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.828 [2024-12-14 03:17:42.837220] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.828 [2024-12-14 03:17:42.849195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.828 [2024-12-14 03:17:42.849620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.828 [2024-12-14 03:17:42.849665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.828 [2024-12-14 03:17:42.849688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.828 [2024-12-14 03:17:42.850271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.828 [2024-12-14 03:17:42.850511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.828 [2024-12-14 03:17:42.850520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.828 [2024-12-14 03:17:42.850530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.828 [2024-12-14 03:17:42.850536] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.828 [2024-12-14 03:17:42.862000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.828 [2024-12-14 03:17:42.862402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.828 [2024-12-14 03:17:42.862448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.828 [2024-12-14 03:17:42.862472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.828 [2024-12-14 03:17:42.863057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.828 [2024-12-14 03:17:42.863644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.828 [2024-12-14 03:17:42.863652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.828 [2024-12-14 03:17:42.863659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.828 [2024-12-14 03:17:42.863665] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.828 [2024-12-14 03:17:42.874806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.828 [2024-12-14 03:17:42.875208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.828 [2024-12-14 03:17:42.875224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.828 [2024-12-14 03:17:42.875231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.828 [2024-12-14 03:17:42.875415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.828 [2024-12-14 03:17:42.875584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.828 [2024-12-14 03:17:42.875592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.828 [2024-12-14 03:17:42.875598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.828 [2024-12-14 03:17:42.875604] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.828 [2024-12-14 03:17:42.887590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.828 [2024-12-14 03:17:42.887980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.828 [2024-12-14 03:17:42.887997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.828 [2024-12-14 03:17:42.888003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.828 [2024-12-14 03:17:42.888163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.828 [2024-12-14 03:17:42.888329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.828 [2024-12-14 03:17:42.888337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.828 [2024-12-14 03:17:42.888360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.828 [2024-12-14 03:17:42.888367] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.828 [2024-12-14 03:17:42.900537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.828 [2024-12-14 03:17:42.900948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.828 [2024-12-14 03:17:42.900994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.828 [2024-12-14 03:17:42.901018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.828 [2024-12-14 03:17:42.901455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.828 [2024-12-14 03:17:42.901624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.828 [2024-12-14 03:17:42.901632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.828 [2024-12-14 03:17:42.901638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.828 [2024-12-14 03:17:42.901645] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.828 [2024-12-14 03:17:42.913380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.828 [2024-12-14 03:17:42.913775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.828 [2024-12-14 03:17:42.913791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.828 [2024-12-14 03:17:42.913798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.828 [2024-12-14 03:17:42.913957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.828 [2024-12-14 03:17:42.914116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.828 [2024-12-14 03:17:42.914124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.828 [2024-12-14 03:17:42.914130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.828 [2024-12-14 03:17:42.914136] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.828 [2024-12-14 03:17:42.926188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.828 [2024-12-14 03:17:42.926607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.828 [2024-12-14 03:17:42.926624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.828 [2024-12-14 03:17:42.926632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.828 [2024-12-14 03:17:42.926800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.828 [2024-12-14 03:17:42.926967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.828 [2024-12-14 03:17:42.926975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.828 [2024-12-14 03:17:42.926982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.828 [2024-12-14 03:17:42.926988] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.828 [2024-12-14 03:17:42.939031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.828 [2024-12-14 03:17:42.939474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.828 [2024-12-14 03:17:42.939503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.828 [2024-12-14 03:17:42.939550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.828 [2024-12-14 03:17:42.940135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.828 [2024-12-14 03:17:42.940629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.828 [2024-12-14 03:17:42.940647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.828 [2024-12-14 03:17:42.940662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.829 [2024-12-14 03:17:42.940675] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:27.829 [2024-12-14 03:17:42.954092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.829 [2024-12-14 03:17:42.954614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.829 [2024-12-14 03:17:42.954661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:27.829 [2024-12-14 03:17:42.954686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:27.829 [2024-12-14 03:17:42.955270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:27.829 [2024-12-14 03:17:42.955871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.829 [2024-12-14 03:17:42.955898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.829 [2024-12-14 03:17:42.955920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.829 [2024-12-14 03:17:42.955940] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.089 [2024-12-14 03:17:42.967054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.089 [2024-12-14 03:17:42.967418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.089 [2024-12-14 03:17:42.967434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.089 [2024-12-14 03:17:42.967442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.089 [2024-12-14 03:17:42.967610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.089 [2024-12-14 03:17:42.967778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.089 [2024-12-14 03:17:42.967786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.089 [2024-12-14 03:17:42.967792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.089 [2024-12-14 03:17:42.967799] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.089 [2024-12-14 03:17:42.979798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.089 [2024-12-14 03:17:42.980211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.089 [2024-12-14 03:17:42.980228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.089 [2024-12-14 03:17:42.980235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.089 [2024-12-14 03:17:42.980410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.089 [2024-12-14 03:17:42.980582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.089 [2024-12-14 03:17:42.980590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.089 [2024-12-14 03:17:42.980597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.089 [2024-12-14 03:17:42.980603] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.089 [2024-12-14 03:17:42.992654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.089 [2024-12-14 03:17:42.993042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.089 [2024-12-14 03:17:42.993057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.089 [2024-12-14 03:17:42.993064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.089 [2024-12-14 03:17:42.993224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.089 [2024-12-14 03:17:42.993390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.089 [2024-12-14 03:17:42.993398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.089 [2024-12-14 03:17:42.993404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.089 [2024-12-14 03:17:42.993410] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.089 [2024-12-14 03:17:43.005452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.089 [2024-12-14 03:17:43.005872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.089 [2024-12-14 03:17:43.005890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.089 [2024-12-14 03:17:43.005897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.089 [2024-12-14 03:17:43.006065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.089 [2024-12-14 03:17:43.006233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.089 [2024-12-14 03:17:43.006241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.089 [2024-12-14 03:17:43.006247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.089 [2024-12-14 03:17:43.006254] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.089 [2024-12-14 03:17:43.018291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.089 [2024-12-14 03:17:43.018711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.089 [2024-12-14 03:17:43.018729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.089 [2024-12-14 03:17:43.018736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.089 [2024-12-14 03:17:43.018894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.089 [2024-12-14 03:17:43.019053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.089 [2024-12-14 03:17:43.019061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.089 [2024-12-14 03:17:43.019070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.089 [2024-12-14 03:17:43.019076] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.089 [2024-12-14 03:17:43.031031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.089 [2024-12-14 03:17:43.031419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.089 [2024-12-14 03:17:43.031472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.089 [2024-12-14 03:17:43.031496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.089 [2024-12-14 03:17:43.032065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.089 [2024-12-14 03:17:43.032464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.089 [2024-12-14 03:17:43.032483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.089 [2024-12-14 03:17:43.032497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.089 [2024-12-14 03:17:43.032512] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.089 [2024-12-14 03:17:43.045961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.089 [2024-12-14 03:17:43.046450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.089 [2024-12-14 03:17:43.046472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.089 [2024-12-14 03:17:43.046482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.089 [2024-12-14 03:17:43.046737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.089 [2024-12-14 03:17:43.046993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.089 [2024-12-14 03:17:43.047004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.089 [2024-12-14 03:17:43.047014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.089 [2024-12-14 03:17:43.047024] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.089 [2024-12-14 03:17:43.059071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.089 [2024-12-14 03:17:43.059473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.089 [2024-12-14 03:17:43.059490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.089 [2024-12-14 03:17:43.059498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.089 [2024-12-14 03:17:43.059672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.089 [2024-12-14 03:17:43.059844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.089 [2024-12-14 03:17:43.059852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.089 [2024-12-14 03:17:43.059859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.089 [2024-12-14 03:17:43.059865] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.089 [2024-12-14 03:17:43.072099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.089 [2024-12-14 03:17:43.072475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.089 [2024-12-14 03:17:43.072492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.089 [2024-12-14 03:17:43.072499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.090 [2024-12-14 03:17:43.072668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.090 [2024-12-14 03:17:43.072838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.090 [2024-12-14 03:17:43.072846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.090 [2024-12-14 03:17:43.072852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.090 [2024-12-14 03:17:43.072859] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.090 [2024-12-14 03:17:43.084908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.090 [2024-12-14 03:17:43.085344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.090 [2024-12-14 03:17:43.085361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.090 [2024-12-14 03:17:43.085369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.090 [2024-12-14 03:17:43.085538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.090 [2024-12-14 03:17:43.085707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.090 [2024-12-14 03:17:43.085716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.090 [2024-12-14 03:17:43.085722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.090 [2024-12-14 03:17:43.085729] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.090 [2024-12-14 03:17:43.097768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.090 [2024-12-14 03:17:43.098180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.090 [2024-12-14 03:17:43.098196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.090 [2024-12-14 03:17:43.098203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.090 [2024-12-14 03:17:43.098378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.090 [2024-12-14 03:17:43.098547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.090 [2024-12-14 03:17:43.098556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.090 [2024-12-14 03:17:43.098562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.090 [2024-12-14 03:17:43.098569] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.090 [2024-12-14 03:17:43.110548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.090 [2024-12-14 03:17:43.110987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.090 [2024-12-14 03:17:43.111005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.090 [2024-12-14 03:17:43.111015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.090 [2024-12-14 03:17:43.111184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.090 [2024-12-14 03:17:43.111361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.090 [2024-12-14 03:17:43.111370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.090 [2024-12-14 03:17:43.111377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.090 [2024-12-14 03:17:43.111384] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.090 [2024-12-14 03:17:43.123497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.090 [2024-12-14 03:17:43.123841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.090 [2024-12-14 03:17:43.123887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.090 [2024-12-14 03:17:43.123910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.090 [2024-12-14 03:17:43.124423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.090 [2024-12-14 03:17:43.124592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.090 [2024-12-14 03:17:43.124601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.090 [2024-12-14 03:17:43.124607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.090 [2024-12-14 03:17:43.124613] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.090 [2024-12-14 03:17:43.136455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.090 [2024-12-14 03:17:43.136880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.090 [2024-12-14 03:17:43.136897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.090 [2024-12-14 03:17:43.136904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.090 [2024-12-14 03:17:43.137072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.090 [2024-12-14 03:17:43.137245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.090 [2024-12-14 03:17:43.137253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.090 [2024-12-14 03:17:43.137259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.090 [2024-12-14 03:17:43.137265] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.090 [2024-12-14 03:17:43.149258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.090 [2024-12-14 03:17:43.149670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.090 [2024-12-14 03:17:43.149687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.090 [2024-12-14 03:17:43.149694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.090 [2024-12-14 03:17:43.149853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.090 [2024-12-14 03:17:43.150015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.090 [2024-12-14 03:17:43.150023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.090 [2024-12-14 03:17:43.150029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.090 [2024-12-14 03:17:43.150035] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.090 [2024-12-14 03:17:43.162110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.090 [2024-12-14 03:17:43.162535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.090 [2024-12-14 03:17:43.162552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.090 [2024-12-14 03:17:43.162559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.090 [2024-12-14 03:17:43.162727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.090 [2024-12-14 03:17:43.162896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.090 [2024-12-14 03:17:43.162904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.090 [2024-12-14 03:17:43.162910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.090 [2024-12-14 03:17:43.162917] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.090 [2024-12-14 03:17:43.174964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.090 [2024-12-14 03:17:43.175380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.090 [2024-12-14 03:17:43.175416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.090 [2024-12-14 03:17:43.175442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.090 [2024-12-14 03:17:43.176024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.090 [2024-12-14 03:17:43.176214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.090 [2024-12-14 03:17:43.176221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.090 [2024-12-14 03:17:43.176227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.090 [2024-12-14 03:17:43.176233] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.090 [2024-12-14 03:17:43.187701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.090 [2024-12-14 03:17:43.188126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.090 [2024-12-14 03:17:43.188143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.090 [2024-12-14 03:17:43.188150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.090 [2024-12-14 03:17:43.188325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.090 [2024-12-14 03:17:43.188494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.090 [2024-12-14 03:17:43.188502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.090 [2024-12-14 03:17:43.188511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.090 [2024-12-14 03:17:43.188518] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.090 [2024-12-14 03:17:43.200552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.090 [2024-12-14 03:17:43.200918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.090 [2024-12-14 03:17:43.200934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.090 [2024-12-14 03:17:43.200940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.090 [2024-12-14 03:17:43.201100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.090 [2024-12-14 03:17:43.201259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.091 [2024-12-14 03:17:43.201266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.091 [2024-12-14 03:17:43.201272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.091 [2024-12-14 03:17:43.201278] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.091 [2024-12-14 03:17:43.213418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.091 [2024-12-14 03:17:43.213833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.091 [2024-12-14 03:17:43.213849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.091 [2024-12-14 03:17:43.213856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.091 [2024-12-14 03:17:43.214024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.091 [2024-12-14 03:17:43.214193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.091 [2024-12-14 03:17:43.214201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.091 [2024-12-14 03:17:43.214207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.091 [2024-12-14 03:17:43.214213] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.351 [2024-12-14 03:17:43.226449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.351 [2024-12-14 03:17:43.226863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.351 [2024-12-14 03:17:43.226880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.351 [2024-12-14 03:17:43.226888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.351 [2024-12-14 03:17:43.227062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.351 [2024-12-14 03:17:43.227248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.351 [2024-12-14 03:17:43.227256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.351 [2024-12-14 03:17:43.227262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.351 [2024-12-14 03:17:43.227269] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.351 [2024-12-14 03:17:43.239255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.351 [2024-12-14 03:17:43.239675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.351 [2024-12-14 03:17:43.239691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.351 [2024-12-14 03:17:43.239699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.351 [2024-12-14 03:17:43.239866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.351 [2024-12-14 03:17:43.240034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.351 [2024-12-14 03:17:43.240043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.351 [2024-12-14 03:17:43.240049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.351 [2024-12-14 03:17:43.240055] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.351 [2024-12-14 03:17:43.252080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.351 [2024-12-14 03:17:43.252468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.351 [2024-12-14 03:17:43.252484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.351 [2024-12-14 03:17:43.252491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.351 [2024-12-14 03:17:43.252651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.351 [2024-12-14 03:17:43.252810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.351 [2024-12-14 03:17:43.252817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.351 [2024-12-14 03:17:43.252823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.351 [2024-12-14 03:17:43.252829] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.351 [2024-12-14 03:17:43.264887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.351 [2024-12-14 03:17:43.265300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.351 [2024-12-14 03:17:43.265322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.351 [2024-12-14 03:17:43.265330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.351 [2024-12-14 03:17:43.265498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.351 [2024-12-14 03:17:43.265665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.351 [2024-12-14 03:17:43.265673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.351 [2024-12-14 03:17:43.265679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.351 [2024-12-14 03:17:43.265686] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.351 [2024-12-14 03:17:43.277716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.351 [2024-12-14 03:17:43.278109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.351 [2024-12-14 03:17:43.278124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.351 [2024-12-14 03:17:43.278134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.351 [2024-12-14 03:17:43.278294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.351 [2024-12-14 03:17:43.278483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.351 [2024-12-14 03:17:43.278492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.351 [2024-12-14 03:17:43.278499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.351 [2024-12-14 03:17:43.278505] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.351 [2024-12-14 03:17:43.290499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.351 [2024-12-14 03:17:43.290879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.351 [2024-12-14 03:17:43.290895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.351 [2024-12-14 03:17:43.290902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.351 [2024-12-14 03:17:43.291061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.351 [2024-12-14 03:17:43.291220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.351 [2024-12-14 03:17:43.291228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.351 [2024-12-14 03:17:43.291234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.351 [2024-12-14 03:17:43.291240] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.351 [2024-12-14 03:17:43.303244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.351 [2024-12-14 03:17:43.303679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.351 [2024-12-14 03:17:43.303697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.351 [2024-12-14 03:17:43.303705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.351 [2024-12-14 03:17:43.303878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.351 [2024-12-14 03:17:43.304055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.351 [2024-12-14 03:17:43.304063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.351 [2024-12-14 03:17:43.304070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.351 [2024-12-14 03:17:43.304076] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.351 [2024-12-14 03:17:43.316251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.351 [2024-12-14 03:17:43.316719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.351 [2024-12-14 03:17:43.316766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.351 [2024-12-14 03:17:43.316791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.351 [2024-12-14 03:17:43.317384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.351 [2024-12-14 03:17:43.317834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.351 [2024-12-14 03:17:43.317842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.351 [2024-12-14 03:17:43.317848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.351 [2024-12-14 03:17:43.317854] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.351 [2024-12-14 03:17:43.329220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.351 [2024-12-14 03:17:43.329639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.352 [2024-12-14 03:17:43.329657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.352 [2024-12-14 03:17:43.329664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.352 [2024-12-14 03:17:43.329833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.352 [2024-12-14 03:17:43.329999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.352 [2024-12-14 03:17:43.330007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.352 [2024-12-14 03:17:43.330014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.352 [2024-12-14 03:17:43.330020] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.352 [2024-12-14 03:17:43.342109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.352 [2024-12-14 03:17:43.342526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.352 [2024-12-14 03:17:43.342543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.352 [2024-12-14 03:17:43.342550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.352 [2024-12-14 03:17:43.342717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.352 [2024-12-14 03:17:43.342884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.352 [2024-12-14 03:17:43.342893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.352 [2024-12-14 03:17:43.342899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.352 [2024-12-14 03:17:43.342905] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.352 [2024-12-14 03:17:43.354984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.352 [2024-12-14 03:17:43.355422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.352 [2024-12-14 03:17:43.355440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.352 [2024-12-14 03:17:43.355448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.352 [2024-12-14 03:17:43.355621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.352 [2024-12-14 03:17:43.355800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.352 [2024-12-14 03:17:43.355808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.352 [2024-12-14 03:17:43.355818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.352 [2024-12-14 03:17:43.355825] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.352 [2024-12-14 03:17:43.367857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.352 [2024-12-14 03:17:43.368270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.352 [2024-12-14 03:17:43.368286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.352 [2024-12-14 03:17:43.368293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.352 [2024-12-14 03:17:43.368468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.352 [2024-12-14 03:17:43.368637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.352 [2024-12-14 03:17:43.368645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.352 [2024-12-14 03:17:43.368651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.352 [2024-12-14 03:17:43.368657] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.352 [2024-12-14 03:17:43.380691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.352 [2024-12-14 03:17:43.381106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.352 [2024-12-14 03:17:43.381123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.352 [2024-12-14 03:17:43.381130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.352 [2024-12-14 03:17:43.381298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.352 [2024-12-14 03:17:43.381474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.352 [2024-12-14 03:17:43.381482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.352 [2024-12-14 03:17:43.381489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.352 [2024-12-14 03:17:43.381495] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.352 [2024-12-14 03:17:43.393529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.352 [2024-12-14 03:17:43.393921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.352 [2024-12-14 03:17:43.393938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.352 [2024-12-14 03:17:43.393945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.352 [2024-12-14 03:17:43.394104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.352 [2024-12-14 03:17:43.394263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.352 [2024-12-14 03:17:43.394271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.352 [2024-12-14 03:17:43.394277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.352 [2024-12-14 03:17:43.394283] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.352 [2024-12-14 03:17:43.406329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.352 [2024-12-14 03:17:43.406742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.352 [2024-12-14 03:17:43.406759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.352 [2024-12-14 03:17:43.406766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.352 [2024-12-14 03:17:43.406934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.352 [2024-12-14 03:17:43.407102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.352 [2024-12-14 03:17:43.407110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.352 [2024-12-14 03:17:43.407116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.352 [2024-12-14 03:17:43.407122] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.352 [2024-12-14 03:17:43.419164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.352 [2024-12-14 03:17:43.419581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.352 [2024-12-14 03:17:43.419598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.352 [2024-12-14 03:17:43.419606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.352 [2024-12-14 03:17:43.419774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.352 [2024-12-14 03:17:43.419942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.352 [2024-12-14 03:17:43.419950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.352 [2024-12-14 03:17:43.419956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.352 [2024-12-14 03:17:43.419963] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.352 [2024-12-14 03:17:43.431944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.352 [2024-12-14 03:17:43.432372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.352 [2024-12-14 03:17:43.432418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.352 [2024-12-14 03:17:43.432441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.352 [2024-12-14 03:17:43.432943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.352 [2024-12-14 03:17:43.433112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.352 [2024-12-14 03:17:43.433120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.352 [2024-12-14 03:17:43.433127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.352 [2024-12-14 03:17:43.433133] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.352 [2024-12-14 03:17:43.444810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.352 [2024-12-14 03:17:43.445195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.352 [2024-12-14 03:17:43.445248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.352 [2024-12-14 03:17:43.445279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.352 [2024-12-14 03:17:43.445878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.352 [2024-12-14 03:17:43.446069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.352 [2024-12-14 03:17:43.446077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.352 [2024-12-14 03:17:43.446083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.352 [2024-12-14 03:17:43.446089] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.352 [2024-12-14 03:17:43.457610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.352 [2024-12-14 03:17:43.458000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.352 [2024-12-14 03:17:43.458017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.353 [2024-12-14 03:17:43.458024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.353 [2024-12-14 03:17:43.458192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.353 [2024-12-14 03:17:43.458366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.353 [2024-12-14 03:17:43.458375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.353 [2024-12-14 03:17:43.458381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.353 [2024-12-14 03:17:43.458387] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.353 [2024-12-14 03:17:43.470417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.353 [2024-12-14 03:17:43.470805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.353 [2024-12-14 03:17:43.470821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.353 [2024-12-14 03:17:43.470827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.353 [2024-12-14 03:17:43.470987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.353 [2024-12-14 03:17:43.471146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.353 [2024-12-14 03:17:43.471153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.353 [2024-12-14 03:17:43.471159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.353 [2024-12-14 03:17:43.471165] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.613 [2024-12-14 03:17:43.483437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.613 [2024-12-14 03:17:43.483865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-12-14 03:17:43.483909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-12-14 03:17:43.483933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.613 [2024-12-14 03:17:43.484165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.613 [2024-12-14 03:17:43.484343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.613 [2024-12-14 03:17:43.484352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.613 [2024-12-14 03:17:43.484358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.613 [2024-12-14 03:17:43.484365] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.613 [2024-12-14 03:17:43.496180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.613 [2024-12-14 03:17:43.496604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-12-14 03:17:43.496621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-12-14 03:17:43.496628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.613 [2024-12-14 03:17:43.496796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.613 [2024-12-14 03:17:43.496964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.613 [2024-12-14 03:17:43.496972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.613 [2024-12-14 03:17:43.496978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.613 [2024-12-14 03:17:43.496984] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.613 [2024-12-14 03:17:43.509020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.613 [2024-12-14 03:17:43.509434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-12-14 03:17:43.509451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-12-14 03:17:43.509459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.613 [2024-12-14 03:17:43.509627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.613 [2024-12-14 03:17:43.509794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.613 [2024-12-14 03:17:43.509802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.613 [2024-12-14 03:17:43.509808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.613 [2024-12-14 03:17:43.509815] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.613 [2024-12-14 03:17:43.521852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.613 [2024-12-14 03:17:43.522244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-12-14 03:17:43.522260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-12-14 03:17:43.522267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.613 [2024-12-14 03:17:43.522457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.613 [2024-12-14 03:17:43.522627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.613 [2024-12-14 03:17:43.522636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.613 [2024-12-14 03:17:43.522645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.613 [2024-12-14 03:17:43.522652] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.613 [2024-12-14 03:17:43.534631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.613 [2024-12-14 03:17:43.534964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-12-14 03:17:43.534980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-12-14 03:17:43.534987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.613 [2024-12-14 03:17:43.535155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.613 [2024-12-14 03:17:43.535330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.613 [2024-12-14 03:17:43.535338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.613 [2024-12-14 03:17:43.535345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.613 [2024-12-14 03:17:43.535351] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.613 [2024-12-14 03:17:43.547406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.613 [2024-12-14 03:17:43.547792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-12-14 03:17:43.547807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-12-14 03:17:43.547814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.613 [2024-12-14 03:17:43.547973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.613 [2024-12-14 03:17:43.548133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.613 [2024-12-14 03:17:43.548140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.613 [2024-12-14 03:17:43.548146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.613 [2024-12-14 03:17:43.548152] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.613 [2024-12-14 03:17:43.560362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.613 [2024-12-14 03:17:43.560784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.613 [2024-12-14 03:17:43.560800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.613 [2024-12-14 03:17:43.560808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.614 [2024-12-14 03:17:43.560981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.614 [2024-12-14 03:17:43.561153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.614 [2024-12-14 03:17:43.561162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.614 [2024-12-14 03:17:43.561168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.614 [2024-12-14 03:17:43.561175] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.614 [2024-12-14 03:17:43.573363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.614 [2024-12-14 03:17:43.573818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.614 [2024-12-14 03:17:43.573862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.614 [2024-12-14 03:17:43.573886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.614 [2024-12-14 03:17:43.574305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.614 [2024-12-14 03:17:43.574484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.614 [2024-12-14 03:17:43.574493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.614 [2024-12-14 03:17:43.574500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.614 [2024-12-14 03:17:43.574506] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.614 [2024-12-14 03:17:43.586543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.614 [2024-12-14 03:17:43.586953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.614 [2024-12-14 03:17:43.586970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.614 [2024-12-14 03:17:43.586978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.614 [2024-12-14 03:17:43.587147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.614 [2024-12-14 03:17:43.587323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.614 [2024-12-14 03:17:43.587332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.614 [2024-12-14 03:17:43.587338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.614 [2024-12-14 03:17:43.587345] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.614 [2024-12-14 03:17:43.599405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.614 [2024-12-14 03:17:43.599801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.614 [2024-12-14 03:17:43.599818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.614 [2024-12-14 03:17:43.599825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.614 [2024-12-14 03:17:43.599984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.614 [2024-12-14 03:17:43.600143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.614 [2024-12-14 03:17:43.600151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.614 [2024-12-14 03:17:43.600157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.614 [2024-12-14 03:17:43.600163] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.614 [2024-12-14 03:17:43.612196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.614 [2024-12-14 03:17:43.612606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.614 [2024-12-14 03:17:43.612623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.614 [2024-12-14 03:17:43.612634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.614 [2024-12-14 03:17:43.612802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.614 [2024-12-14 03:17:43.612971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.614 [2024-12-14 03:17:43.612979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.614 [2024-12-14 03:17:43.612986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.614 [2024-12-14 03:17:43.612992] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.614 [2024-12-14 03:17:43.624936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.614 [2024-12-14 03:17:43.625322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.614 [2024-12-14 03:17:43.625339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.614 [2024-12-14 03:17:43.625347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.614 [2024-12-14 03:17:43.625515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.614 [2024-12-14 03:17:43.625682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.614 [2024-12-14 03:17:43.625690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.614 [2024-12-14 03:17:43.625697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.614 [2024-12-14 03:17:43.625703] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.614 [2024-12-14 03:17:43.637678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.614 [2024-12-14 03:17:43.638069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.614 [2024-12-14 03:17:43.638086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.614 [2024-12-14 03:17:43.638093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.614 [2024-12-14 03:17:43.638252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.614 [2024-12-14 03:17:43.638438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.614 [2024-12-14 03:17:43.638447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.614 [2024-12-14 03:17:43.638453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.614 [2024-12-14 03:17:43.638460] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.614 [2024-12-14 03:17:43.650406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.614 [2024-12-14 03:17:43.650792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.614 [2024-12-14 03:17:43.650808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.614 [2024-12-14 03:17:43.650814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.614 [2024-12-14 03:17:43.650974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.614 [2024-12-14 03:17:43.651135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.614 [2024-12-14 03:17:43.651143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.614 [2024-12-14 03:17:43.651149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.614 [2024-12-14 03:17:43.651155] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.614 [2024-12-14 03:17:43.663226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.614 [2024-12-14 03:17:43.663650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.614 [2024-12-14 03:17:43.663694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.614 [2024-12-14 03:17:43.663718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.614 [2024-12-14 03:17:43.664303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.614 [2024-12-14 03:17:43.664746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.614 [2024-12-14 03:17:43.664755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.614 [2024-12-14 03:17:43.664761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.614 [2024-12-14 03:17:43.664768] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.614 [2024-12-14 03:17:43.676065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.614 [2024-12-14 03:17:43.676474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.614 [2024-12-14 03:17:43.676491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.614 [2024-12-14 03:17:43.676499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.614 [2024-12-14 03:17:43.676666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.614 [2024-12-14 03:17:43.676834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.614 [2024-12-14 03:17:43.676842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.614 [2024-12-14 03:17:43.676849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.614 [2024-12-14 03:17:43.676855] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.614 [2024-12-14 03:17:43.688894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.614 [2024-12-14 03:17:43.689284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.614 [2024-12-14 03:17:43.689300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.614 [2024-12-14 03:17:43.689307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.614 [2024-12-14 03:17:43.689497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.614 [2024-12-14 03:17:43.689664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.615 [2024-12-14 03:17:43.689673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.615 [2024-12-14 03:17:43.689682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.615 [2024-12-14 03:17:43.689689] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.615 7293.50 IOPS, 28.49 MiB/s [2024-12-14T02:17:43.748Z] [2024-12-14 03:17:43.701712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.615 [2024-12-14 03:17:43.702134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.615 [2024-12-14 03:17:43.702179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.615 [2024-12-14 03:17:43.702202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.615 [2024-12-14 03:17:43.702708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.615 [2024-12-14 03:17:43.702877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.615 [2024-12-14 03:17:43.702886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.615 [2024-12-14 03:17:43.702892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.615 [2024-12-14 03:17:43.702898] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.615 [2024-12-14 03:17:43.714479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.615 [2024-12-14 03:17:43.714906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.615 [2024-12-14 03:17:43.714950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.615 [2024-12-14 03:17:43.714974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.615 [2024-12-14 03:17:43.715534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.615 [2024-12-14 03:17:43.715703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.615 [2024-12-14 03:17:43.715711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.615 [2024-12-14 03:17:43.715717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.615 [2024-12-14 03:17:43.715723] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.615 [2024-12-14 03:17:43.727298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.615 [2024-12-14 03:17:43.727705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.615 [2024-12-14 03:17:43.727752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.615 [2024-12-14 03:17:43.727776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.615 [2024-12-14 03:17:43.728376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.615 [2024-12-14 03:17:43.728913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.615 [2024-12-14 03:17:43.728930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.615 [2024-12-14 03:17:43.728944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.615 [2024-12-14 03:17:43.728958] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.615 [2024-12-14 03:17:43.742427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.615 [2024-12-14 03:17:43.742926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.615 [2024-12-14 03:17:43.742971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.615 [2024-12-14 03:17:43.742994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.615 [2024-12-14 03:17:43.743592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.615 [2024-12-14 03:17:43.744040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.615 [2024-12-14 03:17:43.744051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.615 [2024-12-14 03:17:43.744061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.615 [2024-12-14 03:17:43.744070] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.875 [2024-12-14 03:17:43.755419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.875 [2024-12-14 03:17:43.755845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.875 [2024-12-14 03:17:43.755861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.875 [2024-12-14 03:17:43.755868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.875 [2024-12-14 03:17:43.756036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.875 [2024-12-14 03:17:43.756204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.875 [2024-12-14 03:17:43.756212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.875 [2024-12-14 03:17:43.756218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.875 [2024-12-14 03:17:43.756224] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.875 [2024-12-14 03:17:43.768260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.875 [2024-12-14 03:17:43.768677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.875 [2024-12-14 03:17:43.768693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.875 [2024-12-14 03:17:43.768700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.875 [2024-12-14 03:17:43.768868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.875 [2024-12-14 03:17:43.769037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.875 [2024-12-14 03:17:43.769045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.875 [2024-12-14 03:17:43.769051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.875 [2024-12-14 03:17:43.769057] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.875 [2024-12-14 03:17:43.781106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.875 [2024-12-14 03:17:43.781546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.875 [2024-12-14 03:17:43.781590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.875 [2024-12-14 03:17:43.781621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.875 [2024-12-14 03:17:43.782091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.875 [2024-12-14 03:17:43.782260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.875 [2024-12-14 03:17:43.782268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.875 [2024-12-14 03:17:43.782274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.875 [2024-12-14 03:17:43.782281] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.875 [2024-12-14 03:17:43.793894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.875 [2024-12-14 03:17:43.794345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.875 [2024-12-14 03:17:43.794391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.875 [2024-12-14 03:17:43.794415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.875 [2024-12-14 03:17:43.794923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.875 [2024-12-14 03:17:43.795091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.875 [2024-12-14 03:17:43.795099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.875 [2024-12-14 03:17:43.795106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.875 [2024-12-14 03:17:43.795112] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.875 [2024-12-14 03:17:43.806717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.875 [2024-12-14 03:17:43.807152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.876 [2024-12-14 03:17:43.807169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.876 [2024-12-14 03:17:43.807176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.876 [2024-12-14 03:17:43.807352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.876 [2024-12-14 03:17:43.807520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.876 [2024-12-14 03:17:43.807528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.876 [2024-12-14 03:17:43.807535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.876 [2024-12-14 03:17:43.807541] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.876 [2024-12-14 03:17:43.819582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.876 [2024-12-14 03:17:43.819964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.876 [2024-12-14 03:17:43.819981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.876 [2024-12-14 03:17:43.819989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.876 [2024-12-14 03:17:43.820157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.876 [2024-12-14 03:17:43.820335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.876 [2024-12-14 03:17:43.820360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.876 [2024-12-14 03:17:43.820367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.876 [2024-12-14 03:17:43.820374] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.876 [2024-12-14 03:17:43.832527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.876 [2024-12-14 03:17:43.832967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.876 [2024-12-14 03:17:43.832984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.876 [2024-12-14 03:17:43.832992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.876 [2024-12-14 03:17:43.833165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.876 [2024-12-14 03:17:43.833345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.876 [2024-12-14 03:17:43.833354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.876 [2024-12-14 03:17:43.833360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.876 [2024-12-14 03:17:43.833367] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.876 [2024-12-14 03:17:43.845252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.876 [2024-12-14 03:17:43.845706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.876 [2024-12-14 03:17:43.845752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.876 [2024-12-14 03:17:43.845776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.876 [2024-12-14 03:17:43.846247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.876 [2024-12-14 03:17:43.846423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.876 [2024-12-14 03:17:43.846432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.876 [2024-12-14 03:17:43.846438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.876 [2024-12-14 03:17:43.846444] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.876 [2024-12-14 03:17:43.858124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.876 [2024-12-14 03:17:43.858567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.876 [2024-12-14 03:17:43.858584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.876 [2024-12-14 03:17:43.858591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.876 [2024-12-14 03:17:43.858760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.876 [2024-12-14 03:17:43.858927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.876 [2024-12-14 03:17:43.858935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.876 [2024-12-14 03:17:43.858946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.876 [2024-12-14 03:17:43.858953] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.876 [2024-12-14 03:17:43.870948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.876 [2024-12-14 03:17:43.871394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.876 [2024-12-14 03:17:43.871411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.876 [2024-12-14 03:17:43.871418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.876 [2024-12-14 03:17:43.871586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.876 [2024-12-14 03:17:43.871754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.876 [2024-12-14 03:17:43.871762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.876 [2024-12-14 03:17:43.871768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.876 [2024-12-14 03:17:43.871774] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.876 [2024-12-14 03:17:43.883861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.876 [2024-12-14 03:17:43.884287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.876 [2024-12-14 03:17:43.884303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.876 [2024-12-14 03:17:43.884310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.876 [2024-12-14 03:17:43.884485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.876 [2024-12-14 03:17:43.884653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.876 [2024-12-14 03:17:43.884661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.876 [2024-12-14 03:17:43.884667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.876 [2024-12-14 03:17:43.884674] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.876 [2024-12-14 03:17:43.896776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.876 [2024-12-14 03:17:43.897222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.876 [2024-12-14 03:17:43.897266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.876 [2024-12-14 03:17:43.897289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.876 [2024-12-14 03:17:43.897887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.876 [2024-12-14 03:17:43.898278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.876 [2024-12-14 03:17:43.898286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.876 [2024-12-14 03:17:43.898292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.876 [2024-12-14 03:17:43.898299] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.876 [2024-12-14 03:17:43.909591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.876 [2024-12-14 03:17:43.909983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.876 [2024-12-14 03:17:43.909999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.876 [2024-12-14 03:17:43.910006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.876 [2024-12-14 03:17:43.910165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.876 [2024-12-14 03:17:43.910331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.876 [2024-12-14 03:17:43.910339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.876 [2024-12-14 03:17:43.910345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.876 [2024-12-14 03:17:43.910351] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.876 [2024-12-14 03:17:43.922453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.876 [2024-12-14 03:17:43.922848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.876 [2024-12-14 03:17:43.922863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.876 [2024-12-14 03:17:43.922870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.876 [2024-12-14 03:17:43.923029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.876 [2024-12-14 03:17:43.923189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.876 [2024-12-14 03:17:43.923196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.876 [2024-12-14 03:17:43.923202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.876 [2024-12-14 03:17:43.923208] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.876 [2024-12-14 03:17:43.935186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.876 [2024-12-14 03:17:43.935620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.876 [2024-12-14 03:17:43.935638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.876 [2024-12-14 03:17:43.935645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.877 [2024-12-14 03:17:43.935814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.877 [2024-12-14 03:17:43.935981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.877 [2024-12-14 03:17:43.935989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.877 [2024-12-14 03:17:43.935995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.877 [2024-12-14 03:17:43.936002] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.877 [2024-12-14 03:17:43.948020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.877 [2024-12-14 03:17:43.948438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.877 [2024-12-14 03:17:43.948454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.877 [2024-12-14 03:17:43.948464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.877 [2024-12-14 03:17:43.948624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.877 [2024-12-14 03:17:43.948782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.877 [2024-12-14 03:17:43.948790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.877 [2024-12-14 03:17:43.948796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.877 [2024-12-14 03:17:43.948802] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.877 [2024-12-14 03:17:43.960754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.877 [2024-12-14 03:17:43.961164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.877 [2024-12-14 03:17:43.961180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.877 [2024-12-14 03:17:43.961186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.877 [2024-12-14 03:17:43.961368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.877 [2024-12-14 03:17:43.961537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.877 [2024-12-14 03:17:43.961545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.877 [2024-12-14 03:17:43.961551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.877 [2024-12-14 03:17:43.961558] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.877 [2024-12-14 03:17:43.973596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.877 [2024-12-14 03:17:43.974019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.877 [2024-12-14 03:17:43.974063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.877 [2024-12-14 03:17:43.974087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.877 [2024-12-14 03:17:43.974684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.877 [2024-12-14 03:17:43.975273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.877 [2024-12-14 03:17:43.975282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.877 [2024-12-14 03:17:43.975289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.877 [2024-12-14 03:17:43.975294] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:28.877 [2024-12-14 03:17:43.986461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.877 [2024-12-14 03:17:43.986829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.877 [2024-12-14 03:17:43.986846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.877 [2024-12-14 03:17:43.986854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.877 [2024-12-14 03:17:43.987022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.877 [2024-12-14 03:17:43.987197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.877 [2024-12-14 03:17:43.987206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.877 [2024-12-14 03:17:43.987212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.877 [2024-12-14 03:17:43.987218] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.877 [2024-12-14 03:17:43.999262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.877 [2024-12-14 03:17:43.999563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.877 [2024-12-14 03:17:43.999580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:28.877 [2024-12-14 03:17:43.999587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:28.877 [2024-12-14 03:17:43.999755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:28.877 [2024-12-14 03:17:43.999923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.877 [2024-12-14 03:17:43.999931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.877 [2024-12-14 03:17:43.999938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.877 [2024-12-14 03:17:43.999944] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.191 [2024-12-14 03:17:44.012302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.191 [2024-12-14 03:17:44.012729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.191 [2024-12-14 03:17:44.012745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.191 [2024-12-14 03:17:44.012752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.191 [2024-12-14 03:17:44.012925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.191 [2024-12-14 03:17:44.013098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.191 [2024-12-14 03:17:44.013106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.191 [2024-12-14 03:17:44.013113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.191 [2024-12-14 03:17:44.013119] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.191 [2024-12-14 03:17:44.025125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.191 [2024-12-14 03:17:44.025546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.191 [2024-12-14 03:17:44.025563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.191 [2024-12-14 03:17:44.025570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.191 [2024-12-14 03:17:44.025738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.191 [2024-12-14 03:17:44.025906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.191 [2024-12-14 03:17:44.025914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.192 [2024-12-14 03:17:44.025924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.192 [2024-12-14 03:17:44.025930] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.192 [2024-12-14 03:17:44.037909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.192 [2024-12-14 03:17:44.038298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.192 [2024-12-14 03:17:44.038319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.192 [2024-12-14 03:17:44.038326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.192 [2024-12-14 03:17:44.038485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.192 [2024-12-14 03:17:44.038644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.192 [2024-12-14 03:17:44.038652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.192 [2024-12-14 03:17:44.038658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.192 [2024-12-14 03:17:44.038664] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.192 [2024-12-14 03:17:44.050763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.192 [2024-12-14 03:17:44.051185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.192 [2024-12-14 03:17:44.051229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.192 [2024-12-14 03:17:44.051252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.192 [2024-12-14 03:17:44.051705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.192 [2024-12-14 03:17:44.051874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.192 [2024-12-14 03:17:44.051883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.192 [2024-12-14 03:17:44.051889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.192 [2024-12-14 03:17:44.051895] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.192 [2024-12-14 03:17:44.063572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.192 [2024-12-14 03:17:44.063970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.192 [2024-12-14 03:17:44.064015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.192 [2024-12-14 03:17:44.064038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.192 [2024-12-14 03:17:44.064491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.192 [2024-12-14 03:17:44.064660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.192 [2024-12-14 03:17:44.064668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.192 [2024-12-14 03:17:44.064674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.192 [2024-12-14 03:17:44.064680] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.192 [2024-12-14 03:17:44.076410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.192 [2024-12-14 03:17:44.076843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.192 [2024-12-14 03:17:44.076860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.192 [2024-12-14 03:17:44.076867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.192 [2024-12-14 03:17:44.077035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.192 [2024-12-14 03:17:44.077203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.192 [2024-12-14 03:17:44.077211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.192 [2024-12-14 03:17:44.077217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.192 [2024-12-14 03:17:44.077223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.192 [2024-12-14 03:17:44.089378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.192 [2024-12-14 03:17:44.089804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.192 [2024-12-14 03:17:44.089821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.192 [2024-12-14 03:17:44.089828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.192 [2024-12-14 03:17:44.089996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.192 [2024-12-14 03:17:44.090163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.192 [2024-12-14 03:17:44.090171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.192 [2024-12-14 03:17:44.090177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.192 [2024-12-14 03:17:44.090184] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.192 [2024-12-14 03:17:44.102339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.192 [2024-12-14 03:17:44.102665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.192 [2024-12-14 03:17:44.102681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.192 [2024-12-14 03:17:44.102688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.192 [2024-12-14 03:17:44.102856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.192 [2024-12-14 03:17:44.103023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.192 [2024-12-14 03:17:44.103032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.192 [2024-12-14 03:17:44.103038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.192 [2024-12-14 03:17:44.103044] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.192 [2024-12-14 03:17:44.115091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.192 [2024-12-14 03:17:44.115483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.192 [2024-12-14 03:17:44.115500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.192 [2024-12-14 03:17:44.115510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.192 [2024-12-14 03:17:44.115669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.192 [2024-12-14 03:17:44.115828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.192 [2024-12-14 03:17:44.115836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.192 [2024-12-14 03:17:44.115842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.192 [2024-12-14 03:17:44.115848] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.192 [2024-12-14 03:17:44.127867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.192 [2024-12-14 03:17:44.128288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.192 [2024-12-14 03:17:44.128305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.192 [2024-12-14 03:17:44.128319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.192 [2024-12-14 03:17:44.128487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.192 [2024-12-14 03:17:44.128655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.192 [2024-12-14 03:17:44.128663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.192 [2024-12-14 03:17:44.128670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.192 [2024-12-14 03:17:44.128676] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.192 [2024-12-14 03:17:44.140707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.192 [2024-12-14 03:17:44.141122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.192 [2024-12-14 03:17:44.141137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.192 [2024-12-14 03:17:44.141144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.192 [2024-12-14 03:17:44.141303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.192 [2024-12-14 03:17:44.141490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.192 [2024-12-14 03:17:44.141499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.192 [2024-12-14 03:17:44.141505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.192 [2024-12-14 03:17:44.141511] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.192 [2024-12-14 03:17:44.153647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.192 [2024-12-14 03:17:44.153993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.192 [2024-12-14 03:17:44.154010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.192 [2024-12-14 03:17:44.154017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.192 [2024-12-14 03:17:44.154185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.192 [2024-12-14 03:17:44.154361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.192 [2024-12-14 03:17:44.154370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.192 [2024-12-14 03:17:44.154377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.193 [2024-12-14 03:17:44.154383] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.193 [2024-12-14 03:17:44.166421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.193 [2024-12-14 03:17:44.166867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.193 [2024-12-14 03:17:44.166910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.193 [2024-12-14 03:17:44.166933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.193 [2024-12-14 03:17:44.167529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.193 [2024-12-14 03:17:44.167965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.193 [2024-12-14 03:17:44.167973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.193 [2024-12-14 03:17:44.167979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.193 [2024-12-14 03:17:44.167986] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.193 [2024-12-14 03:17:44.179281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.193 [2024-12-14 03:17:44.179785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.193 [2024-12-14 03:17:44.179803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.193 [2024-12-14 03:17:44.179812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.193 [2024-12-14 03:17:44.179981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.193 [2024-12-14 03:17:44.180149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.193 [2024-12-14 03:17:44.180158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.193 [2024-12-14 03:17:44.180164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.193 [2024-12-14 03:17:44.180171] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.193 [2024-12-14 03:17:44.192076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.193 [2024-12-14 03:17:44.192431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.193 [2024-12-14 03:17:44.192447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.193 [2024-12-14 03:17:44.192455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.193 [2024-12-14 03:17:44.192623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.193 [2024-12-14 03:17:44.192791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.193 [2024-12-14 03:17:44.192800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.193 [2024-12-14 03:17:44.192809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.193 [2024-12-14 03:17:44.192816] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.193 [2024-12-14 03:17:44.204989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.193 [2024-12-14 03:17:44.205438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.193 [2024-12-14 03:17:44.205514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.193 [2024-12-14 03:17:44.205540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.193 [2024-12-14 03:17:44.206037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.193 [2024-12-14 03:17:44.206205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.193 [2024-12-14 03:17:44.206213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.193 [2024-12-14 03:17:44.206220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.193 [2024-12-14 03:17:44.206226] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.193 [2024-12-14 03:17:44.217832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.193 [2024-12-14 03:17:44.218245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.193 [2024-12-14 03:17:44.218262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.193 [2024-12-14 03:17:44.218269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.193 [2024-12-14 03:17:44.218442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.193 [2024-12-14 03:17:44.218610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.193 [2024-12-14 03:17:44.218618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.193 [2024-12-14 03:17:44.218624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.193 [2024-12-14 03:17:44.218630] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.193 [2024-12-14 03:17:44.230697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.193 [2024-12-14 03:17:44.231067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.193 [2024-12-14 03:17:44.231083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.193 [2024-12-14 03:17:44.231090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.193 [2024-12-14 03:17:44.231258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.193 [2024-12-14 03:17:44.231431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.193 [2024-12-14 03:17:44.231440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.193 [2024-12-14 03:17:44.231446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.193 [2024-12-14 03:17:44.231452] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.193 [2024-12-14 03:17:44.243576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.193 [2024-12-14 03:17:44.243850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.193 [2024-12-14 03:17:44.243868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.193 [2024-12-14 03:17:44.243875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.193 [2024-12-14 03:17:44.244043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.193 [2024-12-14 03:17:44.244210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.193 [2024-12-14 03:17:44.244218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.193 [2024-12-14 03:17:44.244225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.193 [2024-12-14 03:17:44.244231] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.193 [2024-12-14 03:17:44.256413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.193 [2024-12-14 03:17:44.256837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.193 [2024-12-14 03:17:44.256853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.193 [2024-12-14 03:17:44.256860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.193 [2024-12-14 03:17:44.257033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.193 [2024-12-14 03:17:44.257191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.193 [2024-12-14 03:17:44.257199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.193 [2024-12-14 03:17:44.257205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.193 [2024-12-14 03:17:44.257210] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.193 [2024-12-14 03:17:44.269188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.193 [2024-12-14 03:17:44.269626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.193 [2024-12-14 03:17:44.269643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.193 [2024-12-14 03:17:44.269650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.193 [2024-12-14 03:17:44.269818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.193 [2024-12-14 03:17:44.269985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.193 [2024-12-14 03:17:44.269993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.193 [2024-12-14 03:17:44.270000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.193 [2024-12-14 03:17:44.270006] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.193 [2024-12-14 03:17:44.282056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.193 [2024-12-14 03:17:44.282507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.193 [2024-12-14 03:17:44.282523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.193 [2024-12-14 03:17:44.282533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.193 [2024-12-14 03:17:44.282692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.193 [2024-12-14 03:17:44.282852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.193 [2024-12-14 03:17:44.282860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.193 [2024-12-14 03:17:44.282865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.193 [2024-12-14 03:17:44.282871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.193 [2024-12-14 03:17:44.294856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.194 [2024-12-14 03:17:44.295286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.194 [2024-12-14 03:17:44.295302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.194 [2024-12-14 03:17:44.295309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.194 [2024-12-14 03:17:44.295485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.194 [2024-12-14 03:17:44.295653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.194 [2024-12-14 03:17:44.295661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.194 [2024-12-14 03:17:44.295667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.194 [2024-12-14 03:17:44.295674] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.194 [2024-12-14 03:17:44.307688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.194 [2024-12-14 03:17:44.308103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.194 [2024-12-14 03:17:44.308119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.194 [2024-12-14 03:17:44.308126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.194 [2024-12-14 03:17:44.308294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.194 [2024-12-14 03:17:44.308468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.194 [2024-12-14 03:17:44.308477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.194 [2024-12-14 03:17:44.308483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.194 [2024-12-14 03:17:44.308489] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.194 [2024-12-14 03:17:44.320660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.466 [2024-12-14 03:17:44.321090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.466 [2024-12-14 03:17:44.321107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.466 [2024-12-14 03:17:44.321114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.466 [2024-12-14 03:17:44.321287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.466 [2024-12-14 03:17:44.321470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.466 [2024-12-14 03:17:44.321479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.466 [2024-12-14 03:17:44.321485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.467 [2024-12-14 03:17:44.321492] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.467 [2024-12-14 03:17:44.333702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.467 [2024-12-14 03:17:44.334136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.467 [2024-12-14 03:17:44.334153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.467 [2024-12-14 03:17:44.334160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.467 [2024-12-14 03:17:44.334341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.467 [2024-12-14 03:17:44.334515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.467 [2024-12-14 03:17:44.334523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.467 [2024-12-14 03:17:44.334530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.467 [2024-12-14 03:17:44.334536] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.467 [2024-12-14 03:17:44.346683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.467 [2024-12-14 03:17:44.347095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.467 [2024-12-14 03:17:44.347112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.467 [2024-12-14 03:17:44.347119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.467 [2024-12-14 03:17:44.347292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.467 [2024-12-14 03:17:44.347470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.467 [2024-12-14 03:17:44.347480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.467 [2024-12-14 03:17:44.347486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.467 [2024-12-14 03:17:44.347493] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.467 [2024-12-14 03:17:44.359705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.467 [2024-12-14 03:17:44.360113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.467 [2024-12-14 03:17:44.360130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.467 [2024-12-14 03:17:44.360137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.467 [2024-12-14 03:17:44.360310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.467 [2024-12-14 03:17:44.360490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.467 [2024-12-14 03:17:44.360499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.467 [2024-12-14 03:17:44.360508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.467 [2024-12-14 03:17:44.360515] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.467 [2024-12-14 03:17:44.372725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.467 [2024-12-14 03:17:44.373139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.467 [2024-12-14 03:17:44.373156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.467 [2024-12-14 03:17:44.373163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.467 [2024-12-14 03:17:44.373343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.467 [2024-12-14 03:17:44.373516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.467 [2024-12-14 03:17:44.373524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.467 [2024-12-14 03:17:44.373530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.467 [2024-12-14 03:17:44.373537] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.467 [2024-12-14 03:17:44.385722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.467 [2024-12-14 03:17:44.386152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.467 [2024-12-14 03:17:44.386169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.467 [2024-12-14 03:17:44.386176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.467 [2024-12-14 03:17:44.386356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.467 [2024-12-14 03:17:44.386530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.467 [2024-12-14 03:17:44.386538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.467 [2024-12-14 03:17:44.386545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.467 [2024-12-14 03:17:44.386551] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.467 [2024-12-14 03:17:44.398525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.467 [2024-12-14 03:17:44.398968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.467 [2024-12-14 03:17:44.399013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.467 [2024-12-14 03:17:44.399037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.467 [2024-12-14 03:17:44.399637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.467 [2024-12-14 03:17:44.400227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.467 [2024-12-14 03:17:44.400251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.467 [2024-12-14 03:17:44.400273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.467 [2024-12-14 03:17:44.400293] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.467 [2024-12-14 03:17:44.411323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.467 [2024-12-14 03:17:44.411727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.467 [2024-12-14 03:17:44.411743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.467 [2024-12-14 03:17:44.411750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.467 [2024-12-14 03:17:44.411909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.467 [2024-12-14 03:17:44.412067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.467 [2024-12-14 03:17:44.412075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.467 [2024-12-14 03:17:44.412081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.467 [2024-12-14 03:17:44.412087] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.467 [2024-12-14 03:17:44.424254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.467 [2024-12-14 03:17:44.424686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.467 [2024-12-14 03:17:44.424703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.467 [2024-12-14 03:17:44.424710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.467 [2024-12-14 03:17:44.424878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.467 [2024-12-14 03:17:44.425046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.467 [2024-12-14 03:17:44.425055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.467 [2024-12-14 03:17:44.425061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.467 [2024-12-14 03:17:44.425067] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.467 [2024-12-14 03:17:44.437164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.467 [2024-12-14 03:17:44.437536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.467 [2024-12-14 03:17:44.437553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.467 [2024-12-14 03:17:44.437561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.467 [2024-12-14 03:17:44.437729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.467 [2024-12-14 03:17:44.437897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.467 [2024-12-14 03:17:44.437905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.467 [2024-12-14 03:17:44.437911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.467 [2024-12-14 03:17:44.437917] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.467 [2024-12-14 03:17:44.449895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.467 [2024-12-14 03:17:44.450333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.467 [2024-12-14 03:17:44.450379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.467 [2024-12-14 03:17:44.450411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.467 [2024-12-14 03:17:44.450995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.467 [2024-12-14 03:17:44.451507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.467 [2024-12-14 03:17:44.451516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.467 [2024-12-14 03:17:44.451523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.468 [2024-12-14 03:17:44.451528] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.468 [2024-12-14 03:17:44.462798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.468 [2024-12-14 03:17:44.463216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.468 [2024-12-14 03:17:44.463232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.468 [2024-12-14 03:17:44.463239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.468 [2024-12-14 03:17:44.463415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.468 [2024-12-14 03:17:44.463584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.468 [2024-12-14 03:17:44.463592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.468 [2024-12-14 03:17:44.463599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.468 [2024-12-14 03:17:44.463605] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.468 [2024-12-14 03:17:44.475648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.468 [2024-12-14 03:17:44.476088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.468 [2024-12-14 03:17:44.476133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.468 [2024-12-14 03:17:44.476157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.468 [2024-12-14 03:17:44.476633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.468 [2024-12-14 03:17:44.476802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.468 [2024-12-14 03:17:44.476809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.468 [2024-12-14 03:17:44.476816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.468 [2024-12-14 03:17:44.476822] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.468 [2024-12-14 03:17:44.488423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.468 [2024-12-14 03:17:44.488868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.468 [2024-12-14 03:17:44.488884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.468 [2024-12-14 03:17:44.488892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.468 [2024-12-14 03:17:44.489059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.468 [2024-12-14 03:17:44.489234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.468 [2024-12-14 03:17:44.489242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.468 [2024-12-14 03:17:44.489248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.468 [2024-12-14 03:17:44.489255] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.468 [2024-12-14 03:17:44.501288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.468 [2024-12-14 03:17:44.501670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.468 [2024-12-14 03:17:44.501715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.468 [2024-12-14 03:17:44.501739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.468 [2024-12-14 03:17:44.502338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.468 [2024-12-14 03:17:44.502722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.468 [2024-12-14 03:17:44.502730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.468 [2024-12-14 03:17:44.502736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.468 [2024-12-14 03:17:44.502742] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.468 [2024-12-14 03:17:44.514100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.468 [2024-12-14 03:17:44.514493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.468 [2024-12-14 03:17:44.514510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.468 [2024-12-14 03:17:44.514517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.468 [2024-12-14 03:17:44.514676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.468 [2024-12-14 03:17:44.514835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.468 [2024-12-14 03:17:44.514843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.468 [2024-12-14 03:17:44.514849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.468 [2024-12-14 03:17:44.514855] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.468 [2024-12-14 03:17:44.526885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.468 [2024-12-14 03:17:44.527303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.468 [2024-12-14 03:17:44.527324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.468 [2024-12-14 03:17:44.527331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.468 [2024-12-14 03:17:44.527515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.468 [2024-12-14 03:17:44.527683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.468 [2024-12-14 03:17:44.527692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.468 [2024-12-14 03:17:44.527701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.468 [2024-12-14 03:17:44.527708] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.468 [2024-12-14 03:17:44.539742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.468 [2024-12-14 03:17:44.540158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.468 [2024-12-14 03:17:44.540174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.468 [2024-12-14 03:17:44.540181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.468 [2024-12-14 03:17:44.540362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.468 [2024-12-14 03:17:44.540531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.468 [2024-12-14 03:17:44.540539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.468 [2024-12-14 03:17:44.540546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.468 [2024-12-14 03:17:44.540552] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.468 [2024-12-14 03:17:44.552523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.468 [2024-12-14 03:17:44.552938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.468 [2024-12-14 03:17:44.552955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.468 [2024-12-14 03:17:44.552962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.468 [2024-12-14 03:17:44.553121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.468 [2024-12-14 03:17:44.553279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.468 [2024-12-14 03:17:44.553287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.468 [2024-12-14 03:17:44.553293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.468 [2024-12-14 03:17:44.553299] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.468 [2024-12-14 03:17:44.565375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.468 [2024-12-14 03:17:44.565767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.468 [2024-12-14 03:17:44.565783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.468 [2024-12-14 03:17:44.565790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.468 [2024-12-14 03:17:44.565949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.468 [2024-12-14 03:17:44.566108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.468 [2024-12-14 03:17:44.566116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.468 [2024-12-14 03:17:44.566122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.468 [2024-12-14 03:17:44.566128] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.468 [2024-12-14 03:17:44.578167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.468 [2024-12-14 03:17:44.578615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.468 [2024-12-14 03:17:44.578633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.468 [2024-12-14 03:17:44.578640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.468 [2024-12-14 03:17:44.578809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.468 [2024-12-14 03:17:44.578976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.468 [2024-12-14 03:17:44.578985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.468 [2024-12-14 03:17:44.578991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.468 [2024-12-14 03:17:44.578997] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.468 [2024-12-14 03:17:44.590949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.468 [2024-12-14 03:17:44.591369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.469 [2024-12-14 03:17:44.591387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.469 [2024-12-14 03:17:44.591395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.469 [2024-12-14 03:17:44.591569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.469 [2024-12-14 03:17:44.591742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.469 [2024-12-14 03:17:44.591751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.469 [2024-12-14 03:17:44.591757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.469 [2024-12-14 03:17:44.591764] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.746 [2024-12-14 03:17:44.603970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.746 [2024-12-14 03:17:44.604398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-12-14 03:17:44.604416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.746 [2024-12-14 03:17:44.604423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.746 [2024-12-14 03:17:44.604596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.746 [2024-12-14 03:17:44.604774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.746 [2024-12-14 03:17:44.604783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.746 [2024-12-14 03:17:44.604789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.746 [2024-12-14 03:17:44.604796] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.746 [2024-12-14 03:17:44.617021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.746 [2024-12-14 03:17:44.617300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-12-14 03:17:44.617321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.746 [2024-12-14 03:17:44.617332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.746 [2024-12-14 03:17:44.617506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.746 [2024-12-14 03:17:44.617680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.746 [2024-12-14 03:17:44.617688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.746 [2024-12-14 03:17:44.617694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.746 [2024-12-14 03:17:44.617701] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.746 [2024-12-14 03:17:44.629983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.746 [2024-12-14 03:17:44.630437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-12-14 03:17:44.630482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.746 [2024-12-14 03:17:44.630506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.746 [2024-12-14 03:17:44.630717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.746 [2024-12-14 03:17:44.630886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.746 [2024-12-14 03:17:44.630895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.746 [2024-12-14 03:17:44.630901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.746 [2024-12-14 03:17:44.630907] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.746 [2024-12-14 03:17:44.642762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.746 [2024-12-14 03:17:44.643196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-12-14 03:17:44.643240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.746 [2024-12-14 03:17:44.643263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.746 [2024-12-14 03:17:44.643712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.746 [2024-12-14 03:17:44.643881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.746 [2024-12-14 03:17:44.643889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.746 [2024-12-14 03:17:44.643895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.746 [2024-12-14 03:17:44.643902] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.746 [2024-12-14 03:17:44.655628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.746 [2024-12-14 03:17:44.656045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-12-14 03:17:44.656088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.746 [2024-12-14 03:17:44.656112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.746 [2024-12-14 03:17:44.656681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.746 [2024-12-14 03:17:44.656853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.746 [2024-12-14 03:17:44.656861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.746 [2024-12-14 03:17:44.656868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.746 [2024-12-14 03:17:44.656874] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.746 [2024-12-14 03:17:44.668456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.746 [2024-12-14 03:17:44.668867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-12-14 03:17:44.668883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.746 [2024-12-14 03:17:44.668890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.746 [2024-12-14 03:17:44.669049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.746 [2024-12-14 03:17:44.669208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.746 [2024-12-14 03:17:44.669215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.746 [2024-12-14 03:17:44.669221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.746 [2024-12-14 03:17:44.669227] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.746 [2024-12-14 03:17:44.681220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.746 [2024-12-14 03:17:44.681666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-12-14 03:17:44.681684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.746 [2024-12-14 03:17:44.681691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.746 [2024-12-14 03:17:44.681859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.746 [2024-12-14 03:17:44.682028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.746 [2024-12-14 03:17:44.682036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.746 [2024-12-14 03:17:44.682042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.746 [2024-12-14 03:17:44.682048] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.746 [2024-12-14 03:17:44.693947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.746 [2024-12-14 03:17:44.694363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-12-14 03:17:44.694408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.746 [2024-12-14 03:17:44.694432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.746 [2024-12-14 03:17:44.694971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.746 [2024-12-14 03:17:44.695130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.746 [2024-12-14 03:17:44.695138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.746 [2024-12-14 03:17:44.695147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.746 [2024-12-14 03:17:44.695153] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.746 5834.80 IOPS, 22.79 MiB/s [2024-12-14T02:17:44.879Z] [2024-12-14 03:17:44.706682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.746 [2024-12-14 03:17:44.707077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-12-14 03:17:44.707131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.747 [2024-12-14 03:17:44.707155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.747 [2024-12-14 03:17:44.707753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.747 [2024-12-14 03:17:44.707992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.747 [2024-12-14 03:17:44.707999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.747 [2024-12-14 03:17:44.708006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.747 [2024-12-14 03:17:44.708012] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.747 [2024-12-14 03:17:44.719461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.747 [2024-12-14 03:17:44.719889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.747 [2024-12-14 03:17:44.719933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.747 [2024-12-14 03:17:44.719957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.747 [2024-12-14 03:17:44.720371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.747 [2024-12-14 03:17:44.720541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.747 [2024-12-14 03:17:44.720549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.747 [2024-12-14 03:17:44.720555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.747 [2024-12-14 03:17:44.720562] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.747 [2024-12-14 03:17:44.732297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.747 [2024-12-14 03:17:44.732745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.747 [2024-12-14 03:17:44.732791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.747 [2024-12-14 03:17:44.732814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.747 [2024-12-14 03:17:44.733233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.747 [2024-12-14 03:17:44.733417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.747 [2024-12-14 03:17:44.733426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.747 [2024-12-14 03:17:44.733432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.747 [2024-12-14 03:17:44.733438] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.747 [2024-12-14 03:17:44.745026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.747 [2024-12-14 03:17:44.745452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.747 [2024-12-14 03:17:44.745498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.747 [2024-12-14 03:17:44.745522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.747 [2024-12-14 03:17:44.746104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.747 [2024-12-14 03:17:44.746292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.747 [2024-12-14 03:17:44.746300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.747 [2024-12-14 03:17:44.746306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.747 [2024-12-14 03:17:44.746318] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.747 [2024-12-14 03:17:44.757762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.747 [2024-12-14 03:17:44.758197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.747 [2024-12-14 03:17:44.758242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.747 [2024-12-14 03:17:44.758265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.747 [2024-12-14 03:17:44.758864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.747 [2024-12-14 03:17:44.759396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.747 [2024-12-14 03:17:44.759404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.747 [2024-12-14 03:17:44.759411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.747 [2024-12-14 03:17:44.759417] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.747 [2024-12-14 03:17:44.770597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.747 [2024-12-14 03:17:44.771040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.747 [2024-12-14 03:17:44.771057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.747 [2024-12-14 03:17:44.771064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.747 [2024-12-14 03:17:44.771233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.747 [2024-12-14 03:17:44.771407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.747 [2024-12-14 03:17:44.771416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.747 [2024-12-14 03:17:44.771422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.747 [2024-12-14 03:17:44.771428] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.747 [2024-12-14 03:17:44.783473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.747 [2024-12-14 03:17:44.783890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.747 [2024-12-14 03:17:44.783906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.747 [2024-12-14 03:17:44.783916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.747 [2024-12-14 03:17:44.784075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.747 [2024-12-14 03:17:44.784234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.747 [2024-12-14 03:17:44.784242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.747 [2024-12-14 03:17:44.784248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.747 [2024-12-14 03:17:44.784254] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.747 [2024-12-14 03:17:44.796322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.747 [2024-12-14 03:17:44.796742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.747 [2024-12-14 03:17:44.796787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.747 [2024-12-14 03:17:44.796810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.747 [2024-12-14 03:17:44.797292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.747 [2024-12-14 03:17:44.797479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.747 [2024-12-14 03:17:44.797488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.747 [2024-12-14 03:17:44.797495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.747 [2024-12-14 03:17:44.797501] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.747 [2024-12-14 03:17:44.809082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.747 [2024-12-14 03:17:44.809493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.747 [2024-12-14 03:17:44.809538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.747 [2024-12-14 03:17:44.809562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.747 [2024-12-14 03:17:44.810146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.747 [2024-12-14 03:17:44.810589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.747 [2024-12-14 03:17:44.810598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.747 [2024-12-14 03:17:44.810604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.747 [2024-12-14 03:17:44.810610] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.747 [2024-12-14 03:17:44.824167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.747 [2024-12-14 03:17:44.824704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.747 [2024-12-14 03:17:44.824749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.747 [2024-12-14 03:17:44.824773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.747 [2024-12-14 03:17:44.825268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.747 [2024-12-14 03:17:44.825533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.747 [2024-12-14 03:17:44.825546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.747 [2024-12-14 03:17:44.825556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.747 [2024-12-14 03:17:44.825565] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.747 [2024-12-14 03:17:44.837115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.747 [2024-12-14 03:17:44.837543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.747 [2024-12-14 03:17:44.837560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.747 [2024-12-14 03:17:44.837567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.747 [2024-12-14 03:17:44.837736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.748 [2024-12-14 03:17:44.837903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.748 [2024-12-14 03:17:44.837911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.748 [2024-12-14 03:17:44.837917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.748 [2024-12-14 03:17:44.837924] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.748 [2024-12-14 03:17:44.849913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.748 [2024-12-14 03:17:44.850362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.748 [2024-12-14 03:17:44.850380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.748 [2024-12-14 03:17:44.850387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.748 [2024-12-14 03:17:44.850561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.748 [2024-12-14 03:17:44.850733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.748 [2024-12-14 03:17:44.850741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.748 [2024-12-14 03:17:44.850748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.748 [2024-12-14 03:17:44.850754] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.748 [2024-12-14 03:17:44.862943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.748 [2024-12-14 03:17:44.863366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.748 [2024-12-14 03:17:44.863384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:29.748 [2024-12-14 03:17:44.863391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:29.748 [2024-12-14 03:17:44.863565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:29.748 [2024-12-14 03:17:44.863739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.748 [2024-12-14 03:17:44.863747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.748 [2024-12-14 03:17:44.863757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.748 [2024-12-14 03:17:44.863764] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.033 [2024-12-14 03:17:44.875991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.033 [2024-12-14 03:17:44.876426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.033 [2024-12-14 03:17:44.876444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.033 [2024-12-14 03:17:44.876451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.033 [2024-12-14 03:17:44.876625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.033 [2024-12-14 03:17:44.876798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.033 [2024-12-14 03:17:44.876806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.033 [2024-12-14 03:17:44.876813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.033 [2024-12-14 03:17:44.876819] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.033 [2024-12-14 03:17:44.889060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.033 [2024-12-14 03:17:44.889482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.033 [2024-12-14 03:17:44.889500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.033 [2024-12-14 03:17:44.889507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.033 [2024-12-14 03:17:44.889687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.033 [2024-12-14 03:17:44.889860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.033 [2024-12-14 03:17:44.889869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.033 [2024-12-14 03:17:44.889875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.033 [2024-12-14 03:17:44.889882] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.033 [2024-12-14 03:17:44.902250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.033 [2024-12-14 03:17:44.902666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.033 [2024-12-14 03:17:44.902684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.033 [2024-12-14 03:17:44.902691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.033 [2024-12-14 03:17:44.902865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.033 [2024-12-14 03:17:44.903038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.033 [2024-12-14 03:17:44.903047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.033 [2024-12-14 03:17:44.903053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.033 [2024-12-14 03:17:44.903059] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.033 [2024-12-14 03:17:44.915069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.033 [2024-12-14 03:17:44.915421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.033 [2024-12-14 03:17:44.915438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.033 [2024-12-14 03:17:44.915445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.033 [2024-12-14 03:17:44.915613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.033 [2024-12-14 03:17:44.915780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.033 [2024-12-14 03:17:44.915789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.033 [2024-12-14 03:17:44.915795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.033 [2024-12-14 03:17:44.915802] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.033 [2024-12-14 03:17:44.927937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.033 [2024-12-14 03:17:44.928353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.033 [2024-12-14 03:17:44.928370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.033 [2024-12-14 03:17:44.928377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.033 [2024-12-14 03:17:44.928545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.033 [2024-12-14 03:17:44.928713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.033 [2024-12-14 03:17:44.928721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.033 [2024-12-14 03:17:44.928728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.033 [2024-12-14 03:17:44.928734] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.033 [2024-12-14 03:17:44.940949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.033 [2024-12-14 03:17:44.941285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.033 [2024-12-14 03:17:44.941302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.033 [2024-12-14 03:17:44.941309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.033 [2024-12-14 03:17:44.941484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.033 [2024-12-14 03:17:44.941652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.033 [2024-12-14 03:17:44.941660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.033 [2024-12-14 03:17:44.941666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.033 [2024-12-14 03:17:44.941673] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.033 [2024-12-14 03:17:44.953730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.033 [2024-12-14 03:17:44.954143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.033 [2024-12-14 03:17:44.954159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.033 [2024-12-14 03:17:44.954169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.033 [2024-12-14 03:17:44.954334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.033 [2024-12-14 03:17:44.954517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.033 [2024-12-14 03:17:44.954525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.033 [2024-12-14 03:17:44.954531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.033 [2024-12-14 03:17:44.954538] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.033 [2024-12-14 03:17:44.966453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.033 [2024-12-14 03:17:44.966904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.033 [2024-12-14 03:17:44.966947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.033 [2024-12-14 03:17:44.966970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.033 [2024-12-14 03:17:44.967488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.033 [2024-12-14 03:17:44.967657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.033 [2024-12-14 03:17:44.967665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.033 [2024-12-14 03:17:44.967671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.033 [2024-12-14 03:17:44.967677] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.033 [2024-12-14 03:17:44.979325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.033 [2024-12-14 03:17:44.979768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.034 [2024-12-14 03:17:44.979813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.034 [2024-12-14 03:17:44.979836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.034 [2024-12-14 03:17:44.980437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.034 [2024-12-14 03:17:44.980873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.034 [2024-12-14 03:17:44.980881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.034 [2024-12-14 03:17:44.980887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.034 [2024-12-14 03:17:44.980893] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.034 [2024-12-14 03:17:44.992112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.034 [2024-12-14 03:17:44.992552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.034 [2024-12-14 03:17:44.992569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.034 [2024-12-14 03:17:44.992576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.034 [2024-12-14 03:17:44.992744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.034 [2024-12-14 03:17:44.992915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.034 [2024-12-14 03:17:44.992924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.034 [2024-12-14 03:17:44.992930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.034 [2024-12-14 03:17:44.992936] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.034 [2024-12-14 03:17:45.004980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.034 [2024-12-14 03:17:45.005404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.034 [2024-12-14 03:17:45.005451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.034 [2024-12-14 03:17:45.005475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.034 [2024-12-14 03:17:45.005910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.034 [2024-12-14 03:17:45.006081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.034 [2024-12-14 03:17:45.006090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.034 [2024-12-14 03:17:45.006098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.034 [2024-12-14 03:17:45.006106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.034 [2024-12-14 03:17:45.017721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.034 [2024-12-14 03:17:45.018112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.034 [2024-12-14 03:17:45.018129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.034 [2024-12-14 03:17:45.018136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.034 [2024-12-14 03:17:45.018304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.034 [2024-12-14 03:17:45.018476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.034 [2024-12-14 03:17:45.018485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.034 [2024-12-14 03:17:45.018491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.034 [2024-12-14 03:17:45.018498] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.034 [2024-12-14 03:17:45.030633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.034 [2024-12-14 03:17:45.031018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.034 [2024-12-14 03:17:45.031063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.034 [2024-12-14 03:17:45.031087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.034 [2024-12-14 03:17:45.031624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.034 [2024-12-14 03:17:45.031793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.034 [2024-12-14 03:17:45.031801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.034 [2024-12-14 03:17:45.031811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.034 [2024-12-14 03:17:45.031818] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.034 [2024-12-14 03:17:45.043467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.034 [2024-12-14 03:17:45.043899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.034 [2024-12-14 03:17:45.043945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.034 [2024-12-14 03:17:45.043969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.034 [2024-12-14 03:17:45.044568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.034 [2024-12-14 03:17:45.045082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.034 [2024-12-14 03:17:45.045090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.034 [2024-12-14 03:17:45.045096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.034 [2024-12-14 03:17:45.045102] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.034 [2024-12-14 03:17:45.056242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.034 [2024-12-14 03:17:45.056667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.034 [2024-12-14 03:17:45.056713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.034 [2024-12-14 03:17:45.056736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.034 [2024-12-14 03:17:45.057178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.034 [2024-12-14 03:17:45.057354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.034 [2024-12-14 03:17:45.057362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.034 [2024-12-14 03:17:45.057369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.034 [2024-12-14 03:17:45.057375] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.034 [2024-12-14 03:17:45.069115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.034 [2024-12-14 03:17:45.069553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.034 [2024-12-14 03:17:45.069569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.034 [2024-12-14 03:17:45.069576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.034 [2024-12-14 03:17:45.069744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.034 [2024-12-14 03:17:45.069912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.034 [2024-12-14 03:17:45.069920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.034 [2024-12-14 03:17:45.069927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.034 [2024-12-14 03:17:45.069933] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.034 [2024-12-14 03:17:45.081987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.034 [2024-12-14 03:17:45.082394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.034 [2024-12-14 03:17:45.082416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.034 [2024-12-14 03:17:45.082424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.034 [2024-12-14 03:17:45.082591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.034 [2024-12-14 03:17:45.082759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.034 [2024-12-14 03:17:45.082768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.034 [2024-12-14 03:17:45.082774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.034 [2024-12-14 03:17:45.082780] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.034 [2024-12-14 03:17:45.094826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.034 [2024-12-14 03:17:45.095242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.034 [2024-12-14 03:17:45.095258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.034 [2024-12-14 03:17:45.095266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.034 [2024-12-14 03:17:45.095440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.034 [2024-12-14 03:17:45.095610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.034 [2024-12-14 03:17:45.095618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.034 [2024-12-14 03:17:45.095624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.034 [2024-12-14 03:17:45.095630] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.034 [2024-12-14 03:17:45.107662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.034 [2024-12-14 03:17:45.108069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.034 [2024-12-14 03:17:45.108086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.035 [2024-12-14 03:17:45.108093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.035 [2024-12-14 03:17:45.108266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.035 [2024-12-14 03:17:45.108451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.035 [2024-12-14 03:17:45.108460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.035 [2024-12-14 03:17:45.108466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.035 [2024-12-14 03:17:45.108473] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.035 [2024-12-14 03:17:45.120616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.035 [2024-12-14 03:17:45.121039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.035 [2024-12-14 03:17:45.121055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.035 [2024-12-14 03:17:45.121068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.035 [2024-12-14 03:17:45.121240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.035 [2024-12-14 03:17:45.121420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.035 [2024-12-14 03:17:45.121429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.035 [2024-12-14 03:17:45.121435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.035 [2024-12-14 03:17:45.121442] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.035 [2024-12-14 03:17:45.133613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.035 [2024-12-14 03:17:45.134024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.035 [2024-12-14 03:17:45.134041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.035 [2024-12-14 03:17:45.134048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.035 [2024-12-14 03:17:45.134216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.035 [2024-12-14 03:17:45.134389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.035 [2024-12-14 03:17:45.134398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.035 [2024-12-14 03:17:45.134404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.035 [2024-12-14 03:17:45.134410] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.035 [2024-12-14 03:17:45.146475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.035 [2024-12-14 03:17:45.146797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.035 [2024-12-14 03:17:45.146813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.035 [2024-12-14 03:17:45.146819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.035 [2024-12-14 03:17:45.146978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.035 [2024-12-14 03:17:45.147137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.035 [2024-12-14 03:17:45.147144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.035 [2024-12-14 03:17:45.147150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.035 [2024-12-14 03:17:45.147156] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.326 [2024-12-14 03:17:45.159440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.326 [2024-12-14 03:17:45.159892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.326 [2024-12-14 03:17:45.159910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.326 [2024-12-14 03:17:45.159917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.326 [2024-12-14 03:17:45.160090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.326 [2024-12-14 03:17:45.160266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.326 [2024-12-14 03:17:45.160274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.326 [2024-12-14 03:17:45.160280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.326 [2024-12-14 03:17:45.160287] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.326 [2024-12-14 03:17:45.172505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.326 [2024-12-14 03:17:45.172909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.326 [2024-12-14 03:17:45.172926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.326 [2024-12-14 03:17:45.172934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.326 [2024-12-14 03:17:45.173106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.326 [2024-12-14 03:17:45.173303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.326 [2024-12-14 03:17:45.173318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.326 [2024-12-14 03:17:45.173325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.326 [2024-12-14 03:17:45.173332] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.326 [2024-12-14 03:17:45.185496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.326 [2024-12-14 03:17:45.185919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.326 [2024-12-14 03:17:45.185937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.326 [2024-12-14 03:17:45.185944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.326 [2024-12-14 03:17:45.186117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.326 [2024-12-14 03:17:45.186289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.326 [2024-12-14 03:17:45.186298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.326 [2024-12-14 03:17:45.186304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.326 [2024-12-14 03:17:45.186311] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.326 [2024-12-14 03:17:45.198386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.326 [2024-12-14 03:17:45.198796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.326 [2024-12-14 03:17:45.198841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.326 [2024-12-14 03:17:45.198864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.326 [2024-12-14 03:17:45.199354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.326 [2024-12-14 03:17:45.199524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.326 [2024-12-14 03:17:45.199532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.326 [2024-12-14 03:17:45.199542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.326 [2024-12-14 03:17:45.199549] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.326 [2024-12-14 03:17:45.211123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.326 [2024-12-14 03:17:45.211535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.326 [2024-12-14 03:17:45.211553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.326 [2024-12-14 03:17:45.211560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.326 [2024-12-14 03:17:45.211728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.326 [2024-12-14 03:17:45.211896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.326 [2024-12-14 03:17:45.211904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.326 [2024-12-14 03:17:45.211910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.326 [2024-12-14 03:17:45.211916] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.326 [2024-12-14 03:17:45.223926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.326 [2024-12-14 03:17:45.224360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.326 [2024-12-14 03:17:45.224377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.326 [2024-12-14 03:17:45.224384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.326 [2024-12-14 03:17:45.224558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.326 [2024-12-14 03:17:45.224717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.326 [2024-12-14 03:17:45.224725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.326 [2024-12-14 03:17:45.224730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.326 [2024-12-14 03:17:45.224736] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.326 [2024-12-14 03:17:45.236705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.326 [2024-12-14 03:17:45.237151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.326 [2024-12-14 03:17:45.237195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.326 [2024-12-14 03:17:45.237219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.326 [2024-12-14 03:17:45.237620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.326 [2024-12-14 03:17:45.237790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.326 [2024-12-14 03:17:45.237799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.326 [2024-12-14 03:17:45.237805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.326 [2024-12-14 03:17:45.237812] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.326 [2024-12-14 03:17:45.249492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.326 [2024-12-14 03:17:45.249881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.326 [2024-12-14 03:17:45.249896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.326 [2024-12-14 03:17:45.249904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.326 [2024-12-14 03:17:45.250062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.326 [2024-12-14 03:17:45.250221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.326 [2024-12-14 03:17:45.250229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.326 [2024-12-14 03:17:45.250236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.326 [2024-12-14 03:17:45.250242] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.326 [2024-12-14 03:17:45.262305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.326 [2024-12-14 03:17:45.262752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.326 [2024-12-14 03:17:45.262768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.327 [2024-12-14 03:17:45.262775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.327 [2024-12-14 03:17:45.262943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.327 [2024-12-14 03:17:45.263111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.327 [2024-12-14 03:17:45.263119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.327 [2024-12-14 03:17:45.263125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.327 [2024-12-14 03:17:45.263131] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 388665 Killed "${NVMF_APP[@]}" "$@" 00:36:30.327 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:30.327 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:30.327 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:30.327 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:30.327 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.327 [2024-12-14 03:17:45.275328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.327 [2024-12-14 03:17:45.275677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.327 [2024-12-14 03:17:45.275694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.327 [2024-12-14 03:17:45.275702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.327 [2024-12-14 03:17:45.275875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.327 [2024-12-14 03:17:45.276051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.327 [2024-12-14 03:17:45.276060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.327 [2024-12-14 03:17:45.276071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.327 [2024-12-14 03:17:45.276079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
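The burst of "connect() failed, errno = 111" entries above is the host side of the test seeing ECONNREFUSED: the previous nvmf target (PID 388665, killed at bdevperf.sh line 35) is gone, so nothing is listening on 10.0.0.2:4420 until tgt_init brings a new target up. A minimal sketch of that situation, assuming a bash shell with /dev/tcp support and reusing the address/port from the log purely for illustration (this is not part of the test scripts):

```bash
# Illustration only (not SPDK code): errno 111 is ECONNREFUSED, i.e. nothing is
# listening on the target port yet. Poll until a TCP connect succeeds.
addr=10.0.0.2 port=4420
until bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; do
    echo "connect() to $addr:$port refused, retrying..."
    sleep 0.5
done
echo "$addr:$port is accepting connections again"
```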
00:36:30.327 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=388804 00:36:30.327 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:30.327 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 388804 00:36:30.327 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 388804 ']' 00:36:30.327 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:30.327 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:30.327 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:30.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:30.327 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:30.327 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.327 [2024-12-14 03:17:45.288303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.327 [2024-12-14 03:17:45.288713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.327 [2024-12-14 03:17:45.288729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.327 [2024-12-14 03:17:45.288737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.327 [2024-12-14 03:17:45.288910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.327 [2024-12-14 03:17:45.289083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.327 [2024-12-14 03:17:45.289092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.327 [2024-12-14 03:17:45.289099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.327 [2024-12-14 03:17:45.289105] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
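The waitforlisten 388804 step above blocks until the freshly started nvmf_tgt (PID 388804) is up and serving its RPC socket at /var/tmp/spdk.sock. A rough sketch of that kind of wait, assuming a bash shell; this is not SPDK's actual waitforlisten implementation, only an outline of the idea using the PID and socket path from the log:

```bash
# Hedged sketch: wait until the target process creates its RPC UNIX socket,
# bailing out if the process dies first.
pid=388804 sock=/var/tmp/spdk.sock
while kill -0 "$pid" 2>/dev/null; do
    [ -S "$sock" ] && { echo "RPC socket $sock is ready"; exit 0; }
    sleep 0.2
done
echo "process $pid exited before listening on $sock" >&2
exit 1
```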
00:36:30.327 [2024-12-14 03:17:45.301343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.327 [2024-12-14 03:17:45.301747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.327 [2024-12-14 03:17:45.301764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.327 [2024-12-14 03:17:45.301772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.327 [2024-12-14 03:17:45.301945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.327 [2024-12-14 03:17:45.302120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.327 [2024-12-14 03:17:45.302129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.327 [2024-12-14 03:17:45.302135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.327 [2024-12-14 03:17:45.302142] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.327 [2024-12-14 03:17:45.314373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.327 [2024-12-14 03:17:45.314721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.327 [2024-12-14 03:17:45.314740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.327 [2024-12-14 03:17:45.314748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.327 [2024-12-14 03:17:45.314915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.327 [2024-12-14 03:17:45.315084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.327 [2024-12-14 03:17:45.315093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.327 [2024-12-14 03:17:45.315099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.327 [2024-12-14 03:17:45.315106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.327 [2024-12-14 03:17:45.327301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.327 [2024-12-14 03:17:45.327640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.327 [2024-12-14 03:17:45.327657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.327 [2024-12-14 03:17:45.327664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.327 [2024-12-14 03:17:45.327833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.327 [2024-12-14 03:17:45.328001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.327 [2024-12-14 03:17:45.328009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.327 [2024-12-14 03:17:45.328015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.327 [2024-12-14 03:17:45.328021] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.327 [2024-12-14 03:17:45.328842] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:30.327 [2024-12-14 03:17:45.328881] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:30.327 [2024-12-14 03:17:45.340431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.327 [2024-12-14 03:17:45.340861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.327 [2024-12-14 03:17:45.340878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.327 [2024-12-14 03:17:45.340886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.327 [2024-12-14 03:17:45.341054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.327 [2024-12-14 03:17:45.341223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.327 [2024-12-14 03:17:45.341231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.327 [2024-12-14 03:17:45.341238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.327 [2024-12-14 03:17:45.341244] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
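The EAL parameter line above shows the restarted target pinned with core mask 0xE (binary 1110, i.e. cores 1, 2 and 3), which matches the "Total cores available: 3" notice the app prints shortly afterwards. A quick, hedged way to decode such a mask outside of SPDK:

```bash
# Decode an SPDK/DPDK core mask: 0xE = 0b1110 -> cores 1, 2 and 3 (three cores).
mask=0xE
for bit in {0..31}; do
    (( (mask >> bit) & 1 )) && echo "core $bit selected"
done
```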
00:36:30.327 [2024-12-14 03:17:45.353427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.327 [2024-12-14 03:17:45.353855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.327 [2024-12-14 03:17:45.353871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.327 [2024-12-14 03:17:45.353878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.327 [2024-12-14 03:17:45.354047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.327 [2024-12-14 03:17:45.354215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.327 [2024-12-14 03:17:45.354224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.327 [2024-12-14 03:17:45.354230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.327 [2024-12-14 03:17:45.354237] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.327 [2024-12-14 03:17:45.366392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.327 [2024-12-14 03:17:45.366835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.327 [2024-12-14 03:17:45.366851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.327 [2024-12-14 03:17:45.366859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.327 [2024-12-14 03:17:45.367032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.327 [2024-12-14 03:17:45.367206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.327 [2024-12-14 03:17:45.367214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.328 [2024-12-14 03:17:45.367221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.328 [2024-12-14 03:17:45.367228] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.328 [2024-12-14 03:17:45.379450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.328 [2024-12-14 03:17:45.379801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.328 [2024-12-14 03:17:45.379819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.328 [2024-12-14 03:17:45.379826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.328 [2024-12-14 03:17:45.380000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.328 [2024-12-14 03:17:45.380174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.328 [2024-12-14 03:17:45.380182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.328 [2024-12-14 03:17:45.380191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.328 [2024-12-14 03:17:45.380197] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.328 [2024-12-14 03:17:45.392358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.328 [2024-12-14 03:17:45.392788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.328 [2024-12-14 03:17:45.392805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.328 [2024-12-14 03:17:45.392812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.328 [2024-12-14 03:17:45.392984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.328 [2024-12-14 03:17:45.393153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.328 [2024-12-14 03:17:45.393162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.328 [2024-12-14 03:17:45.393168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.328 [2024-12-14 03:17:45.393175] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.328 [2024-12-14 03:17:45.405345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.328 [2024-12-14 03:17:45.405771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.328 [2024-12-14 03:17:45.405789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.328 [2024-12-14 03:17:45.405796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.328 [2024-12-14 03:17:45.405964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.328 [2024-12-14 03:17:45.406133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.328 [2024-12-14 03:17:45.406142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.328 [2024-12-14 03:17:45.406148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.328 [2024-12-14 03:17:45.406155] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.328 [2024-12-14 03:17:45.408039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:30.328 [2024-12-14 03:17:45.418354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.328 [2024-12-14 03:17:45.418793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.328 [2024-12-14 03:17:45.418813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.328 [2024-12-14 03:17:45.418821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.328 [2024-12-14 03:17:45.418990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.328 [2024-12-14 03:17:45.419160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.328 [2024-12-14 03:17:45.419168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.328 [2024-12-14 03:17:45.419176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.328 [2024-12-14 03:17:45.419182] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.328 [2024-12-14 03:17:45.429358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:30.328 [2024-12-14 03:17:45.429385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:30.328 [2024-12-14 03:17:45.429392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:30.328 [2024-12-14 03:17:45.429398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:30.328 [2024-12-14 03:17:45.429402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
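The app_setup_trace notices above spell out how to pull the tracepoint data while the target runs (group mask 0xFFFF was enabled). A minimal sketch of both options; the output file name is an arbitrary choice, not something the harness uses:

    # Commands taken from the app_setup_trace notices; nvmf_trace.txt is just an example name.
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt      # snapshot of the live nvmf application
    cp /dev/shm/nvmf_trace.0 .                    # or keep the raw trace file for offline analysis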
00:36:30.328 [2024-12-14 03:17:45.430637] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:30.328 [2024-12-14 03:17:45.430748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.328 [2024-12-14 03:17:45.430749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:30.328 [2024-12-14 03:17:45.431433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.328 [2024-12-14 03:17:45.431884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.328 [2024-12-14 03:17:45.431903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.328 [2024-12-14 03:17:45.431911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.328 [2024-12-14 03:17:45.432086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.328 [2024-12-14 03:17:45.432261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.328 [2024-12-14 03:17:45.432270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.328 [2024-12-14 03:17:45.432277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.328 [2024-12-14 03:17:45.432286] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.328 [2024-12-14 03:17:45.444583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.328 [2024-12-14 03:17:45.445061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.328 [2024-12-14 03:17:45.445080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.328 [2024-12-14 03:17:45.445089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.328 [2024-12-14 03:17:45.445265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.328 [2024-12-14 03:17:45.445444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.328 [2024-12-14 03:17:45.445455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.328 [2024-12-14 03:17:45.445462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.328 [2024-12-14 03:17:45.445469] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
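The three reactors match the -c 0xE core mask passed in the EAL parameters earlier: 0xE is binary 1110, i.e. cores 1, 2 and 3, which is also why app.c reported "Total cores available: 3". A small sketch that decodes such a mask:

    # Decode an SPDK/DPDK core mask into core numbers; 0xE -> cores 1 2 3.
    mask=0xE
    printf 'mask %s -> cores:' "$mask"
    for core in $(seq 0 31); do
        (( (mask >> core) & 1 )) && printf ' %d' "$core"
    done
    printf '\n'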
00:36:30.603 [2024-12-14 03:17:45.457689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.603 [2024-12-14 03:17:45.458118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.603 [2024-12-14 03:17:45.458139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.603 [2024-12-14 03:17:45.458148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.603 [2024-12-14 03:17:45.458329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.603 [2024-12-14 03:17:45.458505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.603 [2024-12-14 03:17:45.458513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.603 [2024-12-14 03:17:45.458520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.603 [2024-12-14 03:17:45.458528] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.603 [2024-12-14 03:17:45.470740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.603 [2024-12-14 03:17:45.471165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.603 [2024-12-14 03:17:45.471184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.603 [2024-12-14 03:17:45.471192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.603 [2024-12-14 03:17:45.471373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.603 [2024-12-14 03:17:45.471549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.603 [2024-12-14 03:17:45.471557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.603 [2024-12-14 03:17:45.471564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.603 [2024-12-14 03:17:45.471571] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.603 [2024-12-14 03:17:45.483789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.603 [2024-12-14 03:17:45.484215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.603 [2024-12-14 03:17:45.484234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.603 [2024-12-14 03:17:45.484243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.603 [2024-12-14 03:17:45.484424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.603 [2024-12-14 03:17:45.484600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.603 [2024-12-14 03:17:45.484608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.603 [2024-12-14 03:17:45.484616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.603 [2024-12-14 03:17:45.484623] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.603 [2024-12-14 03:17:45.496843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.603 [2024-12-14 03:17:45.497263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.603 [2024-12-14 03:17:45.497281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.603 [2024-12-14 03:17:45.497289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.603 [2024-12-14 03:17:45.497470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.603 [2024-12-14 03:17:45.497645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.603 [2024-12-14 03:17:45.497654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.603 [2024-12-14 03:17:45.497661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.603 [2024-12-14 03:17:45.497668] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.603 [2024-12-14 03:17:45.509867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.603 [2024-12-14 03:17:45.510271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.603 [2024-12-14 03:17:45.510288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.603 [2024-12-14 03:17:45.510300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.603 [2024-12-14 03:17:45.510477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.604 [2024-12-14 03:17:45.510652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.604 [2024-12-14 03:17:45.510659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.604 [2024-12-14 03:17:45.510666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.604 [2024-12-14 03:17:45.510672] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.604 [2024-12-14 03:17:45.522883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.604 [2024-12-14 03:17:45.523290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.604 [2024-12-14 03:17:45.523307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.604 [2024-12-14 03:17:45.523321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.604 [2024-12-14 03:17:45.523494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.604 [2024-12-14 03:17:45.523667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.604 [2024-12-14 03:17:45.523674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.604 [2024-12-14 03:17:45.523681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.604 [2024-12-14 03:17:45.523687] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.604 [2024-12-14 03:17:45.535899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.604 [2024-12-14 03:17:45.536356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.604 [2024-12-14 03:17:45.536374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.604 [2024-12-14 03:17:45.536382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.604 [2024-12-14 03:17:45.536555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.604 [2024-12-14 03:17:45.536728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.604 [2024-12-14 03:17:45.536736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.604 [2024-12-14 03:17:45.536743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.604 [2024-12-14 03:17:45.536749] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.604 [2024-12-14 03:17:45.548962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.604 [2024-12-14 03:17:45.549414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.604 [2024-12-14 03:17:45.549435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.604 [2024-12-14 03:17:45.549443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.604 [2024-12-14 03:17:45.549616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.604 [2024-12-14 03:17:45.549789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.604 [2024-12-14 03:17:45.549798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.604 [2024-12-14 03:17:45.549804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.604 [2024-12-14 03:17:45.549811] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.604 [2024-12-14 03:17:45.562025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.604 [2024-12-14 03:17:45.562425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.604 [2024-12-14 03:17:45.562442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.604 [2024-12-14 03:17:45.562449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.604 [2024-12-14 03:17:45.562622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.604 [2024-12-14 03:17:45.562795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.604 [2024-12-14 03:17:45.562814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.604 [2024-12-14 03:17:45.562821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.604 [2024-12-14 03:17:45.562827] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.604 [2024-12-14 03:17:45.574398] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.604 [2024-12-14 03:17:45.575020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.604 [2024-12-14 03:17:45.575408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.604 [2024-12-14 03:17:45.575426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.604 [2024-12-14 03:17:45.575434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.604 [2024-12-14 03:17:45.575607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.604 [2024-12-14 03:17:45.575779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.604 [2024-12-14 03:17:45.575787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.604 [2024-12-14 03:17:45.575794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.604 [2024-12-14 03:17:45.575800] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.604 [2024-12-14 03:17:45.588194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.604 [2024-12-14 03:17:45.588546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.604 [2024-12-14 03:17:45.588564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.604 [2024-12-14 03:17:45.588572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.604 [2024-12-14 03:17:45.588746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.604 [2024-12-14 03:17:45.588920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.604 [2024-12-14 03:17:45.588928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.604 [2024-12-14 03:17:45.588934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.604 [2024-12-14 03:17:45.588941] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.604 [2024-12-14 03:17:45.601158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.604 [2024-12-14 03:17:45.601571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.604 [2024-12-14 03:17:45.601588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.604 [2024-12-14 03:17:45.601596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.604 [2024-12-14 03:17:45.601769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.604 [2024-12-14 03:17:45.601942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.604 [2024-12-14 03:17:45.601950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.604 [2024-12-14 03:17:45.601957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.604 [2024-12-14 03:17:45.601964] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.604 Malloc0 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.604 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.604 [2024-12-14 03:17:45.614177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.604 [2024-12-14 03:17:45.614604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.604 [2024-12-14 03:17:45.614621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.604 [2024-12-14 03:17:45.614629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.604 [2024-12-14 03:17:45.614802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.604 [2024-12-14 03:17:45.614975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.604 [2024-12-14 03:17:45.614987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.604 [2024-12-14 03:17:45.614994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.604 [2024-12-14 03:17:45.615000] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.605 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.605 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:30.605 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.605 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.605 [2024-12-14 03:17:45.627196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.605 [2024-12-14 03:17:45.627632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.605 [2024-12-14 03:17:45.627650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c490 with addr=10.0.0.2, port=4420 00:36:30.605 [2024-12-14 03:17:45.627658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c490 is same with the state(6) to be set 00:36:30.605 [2024-12-14 03:17:45.627831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c490 (9): Bad file descriptor 00:36:30.605 [2024-12-14 03:17:45.628004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.605 [2024-12-14 03:17:45.628012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.605 [2024-12-14 03:17:45.628019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:36:30.605 [2024-12-14 03:17:45.628025] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.605 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.605 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:30.605 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.605 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.605 [2024-12-14 03:17:45.639543] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:30.605 [2024-12-14 03:17:45.640224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.605 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.605 03:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 388718 00:36:30.605 [2024-12-14 03:17:45.662452] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:36:31.619 4917.50 IOPS, 19.21 MiB/s [2024-12-14T02:17:47.768Z] 5858.00 IOPS, 22.88 MiB/s [2024-12-14T02:17:48.742Z] 6579.00 IOPS, 25.70 MiB/s [2024-12-14T02:17:49.718Z] 7093.67 IOPS, 27.71 MiB/s [2024-12-14T02:17:51.096Z] 7522.30 IOPS, 29.38 MiB/s [2024-12-14T02:17:52.033Z] 7888.91 IOPS, 30.82 MiB/s [2024-12-14T02:17:52.969Z] 8193.83 IOPS, 32.01 MiB/s [2024-12-14T02:17:53.906Z] 8453.23 IOPS, 33.02 MiB/s [2024-12-14T02:17:54.843Z] 8648.21 IOPS, 33.78 MiB/s [2024-12-14T02:17:54.843Z] 8832.33 IOPS, 34.50 MiB/s 00:36:39.710 Latency(us) 00:36:39.710 [2024-12-14T02:17:54.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.710 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:39.710 Verification LBA range: start 0x0 length 0x4000 00:36:39.710 Nvme1n1 : 15.05 8806.09 34.40 11038.52 0.00 6413.31 631.95 42442.36 00:36:39.710 [2024-12-14T02:17:54.843Z] =================================================================================================================== 00:36:39.710 [2024-12-14T02:17:54.843Z] Total : 8806.09 34.40 11038.52 0.00 6413.31 631.95 42442.36 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 
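Interleaved with the reconnect noise, the rpc_cmd trace above is the complete target bring-up for this test: a TCP transport, a Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and finally the listener on 10.0.0.2 port 4420, at which point the host's reset succeeds and bdevperf ramps up. A standalone sketch of the same sequence, assuming the stock scripts/rpc.py client on its default socket instead of the harness's rpc_cmd wrapper:

    # Target set-up as traced above; all flags are copied from the trace,
    # calling scripts/rpc.py directly is the assumption.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The summary table is self-consistent: 8806.09 IOPS at the 4096-byte IO size is 8806.09 * 4096 / 2^20, about 34.4 MiB/s, matching the MiB/s column.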
00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:39.970 rmmod nvme_tcp 00:36:39.970 rmmod nvme_fabrics 00:36:39.970 rmmod nvme_keyring 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 388804 ']' 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 388804 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 388804 ']' 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 388804 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:39.970 03:17:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 388804 00:36:39.970 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:39.970 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:39.970 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 388804' 00:36:39.970 killing process with pid 388804 00:36:39.970 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 388804 00:36:39.970 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 388804 00:36:40.230 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:40.230 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:40.230 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:40.230 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:36:40.230 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:36:40.230 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:40.230 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:36:40.230 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:40.230 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:40.230 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:40.230 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:40.230 03:17:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:42.768 00:36:42.768 real 0m25.918s 00:36:42.768 user 1m0.517s 00:36:42.768 sys 0m6.666s 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 
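nvmftestfini then unwinds everything: the nvme-tcp, nvme-fabrics and nvme-keyring host modules are removed, the nvmf target application (pid 388804, reactor_1) is killed, the SPDK_NVMF-tagged iptables rules are stripped, and the test network namespace is torn down. In outline (a sketch of what the traced helpers amount to, not the harness's exact code):

    # Rough shape of the nvmftestfini teardown traced above.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 388804                                            # killprocess of this run's nvmf target
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-commented rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # what remove_spdk_ns is assumed to boil down to
    ip -4 addr flush cvl_0_1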
00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:42.768 ************************************ 00:36:42.768 END TEST nvmf_bdevperf 00:36:42.768 ************************************ 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.768 ************************************ 00:36:42.768 START TEST nvmf_target_disconnect 00:36:42.768 ************************************ 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:42.768 * Looking for test storage... 00:36:42.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:42.768 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:42.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.769 --rc genhtml_branch_coverage=1 00:36:42.769 --rc genhtml_function_coverage=1 00:36:42.769 --rc genhtml_legend=1 00:36:42.769 --rc geninfo_all_blocks=1 00:36:42.769 --rc geninfo_unexecuted_blocks=1 00:36:42.769 00:36:42.769 ' 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:42.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.769 --rc genhtml_branch_coverage=1 00:36:42.769 --rc genhtml_function_coverage=1 00:36:42.769 --rc genhtml_legend=1 00:36:42.769 --rc geninfo_all_blocks=1 00:36:42.769 --rc geninfo_unexecuted_blocks=1 00:36:42.769 00:36:42.769 ' 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:42.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.769 --rc genhtml_branch_coverage=1 00:36:42.769 --rc genhtml_function_coverage=1 00:36:42.769 --rc genhtml_legend=1 00:36:42.769 --rc geninfo_all_blocks=1 00:36:42.769 --rc geninfo_unexecuted_blocks=1 00:36:42.769 00:36:42.769 ' 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:42.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.769 --rc genhtml_branch_coverage=1 00:36:42.769 --rc genhtml_function_coverage=1 00:36:42.769 --rc genhtml_legend=1 00:36:42.769 --rc geninfo_all_blocks=1 00:36:42.769 --rc geninfo_unexecuted_blocks=1 00:36:42.769 00:36:42.769 ' 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:42.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:42.769 03:17:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
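target_disconnect.sh starts from the same bdev geometry the bdevperf run used, MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512, before nvmftestinit probes the NICs. Assuming bdev_malloc_create interprets the size in MB, that works out to:

    # Block-count check for the 64 MB / 512 B malloc bdev (values from the trace above).
    MALLOC_BDEV_SIZE=64
    MALLOC_BLOCK_SIZE=512
    echo "$(( MALLOC_BDEV_SIZE * 1024 * 1024 / MALLOC_BLOCK_SIZE )) blocks"   # -> 131072 blocks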
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:48.047 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:48.047 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:48.047 Found net devices under 0000:af:00.0: cvl_0_0 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:48.047 Found net devices under 0000:af:00.1: cvl_0_1 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
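
The block above is nvmf/common.sh enumerating the supported NICs by PCI vendor/device ID and resolving each matching function to its kernel net device through sysfs, which is how the harness ends up with cvl_0_0 and cvl_0_1. A stand-alone sketch of that discovery step, condensed from the trace (the loop and variable names here are illustrative, not the script's own):

```bash
#!/usr/bin/env bash
# Condensed sketch of the NIC discovery traced above -- not the real nvmf/common.sh.
intel=0x8086
# E810 device IDs listed in the trace; 0000:af:00.0/.1 matched 0x159b ("ice" driver).
e810=(0x1592 0x159b)

net_devs=()
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$intel" ]] || continue
    dev_id=$(cat "$pci/device")
    [[ " ${e810[*]} " == *" $dev_id "* ]] || continue
    echo "Found ${pci##*/} ($(cat "$pci/vendor") - $dev_id)"
    # Map the PCI function to its kernel net device, e.g. .../net/cvl_0_0
    for netdir in "$pci"/net/*; do
        [[ -e $netdir ]] && net_devs+=("${netdir##*/}")
    done
done
echo "Net devices: ${net_devs[*]}"   # cvl_0_0 cvl_0_1 in this run
```
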
00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:48.047 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:48.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:48.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:36:48.307 00:36:48.307 --- 10.0.0.2 ping statistics --- 00:36:48.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:48.307 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:48.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:48.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:36:48.307 00:36:48.307 --- 10.0.0.1 ping statistics --- 00:36:48.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:48.307 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:48.307 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:48.566 ************************************ 00:36:48.566 START TEST nvmf_target_disconnect_tc1 00:36:48.566 ************************************ 00:36:48.566 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:48.566 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:48.566 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:48.566 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:48.566 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:48.566 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.566 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:48.566 03:18:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.566 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:48.566 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:48.567 [2024-12-14 03:18:03.564068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:48.567 [2024-12-14 03:18:03.564108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e1bc50 with addr=10.0.0.2, port=4420 00:36:48.567 [2024-12-14 03:18:03.564132] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:48.567 [2024-12-14 03:18:03.564144] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:48.567 [2024-12-14 03:18:03.564150] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:48.567 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:48.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:48.567 Initializing NVMe Controllers 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:48.567 00:36:48.567 real 0m0.099s 00:36:48.567 user 0m0.042s 00:36:48.567 sys 0m0.057s 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:48.567 ************************************ 00:36:48.567 END TEST nvmf_target_disconnect_tc1 00:36:48.567 ************************************ 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:48.567 ************************************ 00:36:48.567 START TEST nvmf_target_disconnect_tc2 00:36:48.567 ************************************ 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=391343 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 391343 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 391343 ']' 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:48.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:48.567 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:48.826 [2024-12-14 03:18:03.701138] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:48.826 [2024-12-14 03:18:03.701179] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:48.826 [2024-12-14 03:18:03.779690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:48.826 [2024-12-14 03:18:03.802268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:48.826 [2024-12-14 03:18:03.802304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:48.826 [2024-12-14 03:18:03.802315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:48.826 [2024-12-14 03:18:03.802321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:48.826 [2024-12-14 03:18:03.802326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:48.826 [2024-12-14 03:18:03.803830] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:36:48.826 [2024-12-14 03:18:03.803936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:36:48.826 [2024-12-14 03:18:03.803955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:36:48.826 [2024-12-14 03:18:03.803956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:36:48.826 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:48.826 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:48.826 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:48.826 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:48.826 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:48.826 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:48.826 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:48.826 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.826 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:49.084 Malloc0 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:49.084 [2024-12-14 03:18:03.968731] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:49.084 03:18:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.084 03:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:49.084 [2024-12-14 03:18:03.997781] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:49.084 03:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.084 03:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:49.084 03:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.084 03:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:49.084 03:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.084 03:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=391365 00:36:49.084 03:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:49.084 03:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:50.995 03:18:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 391343 00:36:50.995 03:18:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error 
(sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 [2024-12-14 03:18:06.029248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed 
with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 [2024-12-14 03:18:06.029443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 
Write completed with error (sct=0, sc=8) 00:36:50.995 starting I/O failed 00:36:50.995 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Write completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Write completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Write completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Write completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 [2024-12-14 03:18:06.029637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Write completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Write completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Write completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Write completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Write completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 
00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Write completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Write completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 Read completed with error (sct=0, sc=8) 00:36:50.996 starting I/O failed 00:36:50.996 [2024-12-14 03:18:06.029825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:50.996 [2024-12-14 03:18:06.029959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.030037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.030363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.030411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.030635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.030676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.030898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.030937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.031082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.031094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.031343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.031384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.031667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.031707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.031852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.031897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 
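
Up to the point where I/O starts failing, the harness has done two things that the trace above walks through in full: put the target NIC into its own network namespace with an address on each side, and configure the freshly started nvmf_tgt over JSON-RPC. A condensed sketch of both stages, reconstructed from the traced commands (the rpc.py path and the plain sleep are assumptions standing in for the rpc_cmd/waitforlisten helpers):

```bash
#!/usr/bin/env bash
# Sketch of the bring-up traced above; interfaces, IPs, and the NQN come from the log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

# Stage 1: isolate the target NIC in a namespace and wire up addresses.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
modprobe nvme-tcp

# Stage 2: start nvmf_tgt inside the namespace and configure it over RPC.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
sleep 2   # the real helper waits for the RPC socket (/var/tmp/spdk.sock) instead of sleeping
rpc="$SPDK/scripts/rpc.py"
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_transport -t tcp -o
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```
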
00:36:50.996 [2024-12-14 03:18:06.031995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.032006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.032212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.032224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.032304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.032321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.032479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.032519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.032704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.032760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.032894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.032925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.033060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.033093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.033208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.033218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.033318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.033328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.033483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.033514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 
00:36:50.996 [2024-12-14 03:18:06.033766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.033797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.033991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.034022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.034141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.034171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.996 qpair failed and we were unable to recover it. 00:36:50.996 [2024-12-14 03:18:06.034356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.996 [2024-12-14 03:18:06.034388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.034511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.034541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.034724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.034754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.034867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.034897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.035084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.035116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.035343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.035375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.035494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.035525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 
00:36:50.997 [2024-12-14 03:18:06.035638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.035669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.035876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.035907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.036019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.036049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.036185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.036219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.036329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.036355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.036536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.036557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.036648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.036670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.036787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.036808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.036985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.037006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.037108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.037129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 
00:36:50.997 [2024-12-14 03:18:06.037212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.037232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.037337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.037361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.037443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.037469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.037563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.037583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.037678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.037698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.037855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.037876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.038026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.038047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.038129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.038150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.038230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.038250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.038337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.038359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 
00:36:50.997 [2024-12-14 03:18:06.038514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.038535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.038633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.038653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.038750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.038771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.038877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.038898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.039041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.039062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.039209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.039230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.039383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.039405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.039555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.997 [2024-12-14 03:18:06.039576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.997 qpair failed and we were unable to recover it. 00:36:50.997 [2024-12-14 03:18:06.039736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.039767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.039951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.039983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 
00:36:50.998 [2024-12-14 03:18:06.040103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.040142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.040245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.040276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.040395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.040428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.040551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.040581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.040752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.040783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.041041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.041072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.041192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.041222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.041340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.041373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.041562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.041593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.041765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.041796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 
00:36:50.998 [2024-12-14 03:18:06.041980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.042011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.042263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.042293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.042489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.042522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.042642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.042673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.042843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.042874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.043107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.043128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.043303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.043333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.043510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.043531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.043680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.043702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.043814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.043835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 
00:36:50.998 [2024-12-14 03:18:06.044064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.044085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.044166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.044185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.044282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.044303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.044404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.044428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.044526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.044548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.044632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.044652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.044756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.044777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.044881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.044902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.045005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.045026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.045115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.045134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 
00:36:50.998 [2024-12-14 03:18:06.045301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.045339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.045445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.045465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.045637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.998 [2024-12-14 03:18:06.045658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.998 qpair failed and we were unable to recover it. 00:36:50.998 [2024-12-14 03:18:06.045821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.045841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.045929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.045950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.046162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.046182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.046292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.046321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.046477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.046516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.046622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.046652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.046757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.046789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 
00:36:50.999 [2024-12-14 03:18:06.046891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.046921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.047070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.047091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.047241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.047262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.047430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.047462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.047566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.047596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.047695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.047726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.047835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.047865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.047977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.048009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.048253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.048274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.048373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.048395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 
00:36:50.999 [2024-12-14 03:18:06.048568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.048666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.048863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.048898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.049154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.049176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.049261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.049280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.049501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.049522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.049607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.049627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.049868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.049889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.050107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.050128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.050231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.050252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.050368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.050389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 
00:36:50.999 [2024-12-14 03:18:06.050489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.050510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.050590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.050611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.050705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.050725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.050815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.050835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.051056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.051087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.051217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.051247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.051436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.051467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.051651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.999 [2024-12-14 03:18:06.051681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:50.999 qpair failed and we were unable to recover it. 00:36:50.999 [2024-12-14 03:18:06.051845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.051865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.051973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.051995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 
00:36:51.000 [2024-12-14 03:18:06.052098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.052120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.052357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.052389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.052584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.052614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.052797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.052827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.053060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.053082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.053317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.053338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.053508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.053539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.053800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.053837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.054072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.054102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.054207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.054237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 
00:36:51.000 [2024-12-14 03:18:06.054496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.054528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.054707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.054738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.054908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.054939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.055177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.055209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.055396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.055429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.055617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.055648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.055832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.055853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.055972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.055994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.056228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.056248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.056402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.056423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 
00:36:51.000 [2024-12-14 03:18:06.056512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.056533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.056683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.056705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.056784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.056804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.056952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.056973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.057122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.057143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.057244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.057263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.057516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.057538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.000 qpair failed and we were unable to recover it. 00:36:51.000 [2024-12-14 03:18:06.057628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.000 [2024-12-14 03:18:06.057649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.057795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.057816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.057969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.057990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 
00:36:51.001 [2024-12-14 03:18:06.058082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.058103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.058264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.058284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.058508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.058529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.058748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.058769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.058879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.058899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.059000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.059022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.059173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.059195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.059283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.059303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.059545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.059576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.059834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.059866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 
00:36:51.001 [2024-12-14 03:18:06.059984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.060015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.060251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.060272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.060437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.060460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.060563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.060584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.060748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.060769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.060852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.060872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.061048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.061069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.061291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.061319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.061519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.061554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.061673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.061703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 
00:36:51.001 [2024-12-14 03:18:06.061889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.061920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.062090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.062114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.062231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.062252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.062441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.062463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.062653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.062675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.062825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.062845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.062938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.062959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.063112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.063133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.063304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.063333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.063521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.063565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 
00:36:51.001 [2024-12-14 03:18:06.063755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.063785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.063987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.001 [2024-12-14 03:18:06.064017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.001 qpair failed and we were unable to recover it. 00:36:51.001 [2024-12-14 03:18:06.064135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.064156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.064308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.064350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.064451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.064472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.064633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.064656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.064818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.064839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.065021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.065052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.065171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.065202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.065395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.065427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 
00:36:51.002 [2024-12-14 03:18:06.065552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.065583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.065689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.065720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.065837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.065858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.066035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.066056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.066170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.066191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.066423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.066456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.066662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.066693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.066874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.066905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.067115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.067146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.067262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.067293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 
00:36:51.002 [2024-12-14 03:18:06.067411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.067443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.067645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.067669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.067758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.067778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.067947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.067967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.068146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.068177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.068296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.068336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.068522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.068554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.068746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.068777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.068897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.068927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 00:36:51.002 [2024-12-14 03:18:06.069070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.002 [2024-12-14 03:18:06.069101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.002 qpair failed and we were unable to recover it. 
00:36:51.002 [2024-12-14 03:18:06.069212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.002 [2024-12-14 03:18:06.069243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:51.002 qpair failed and we were unable to recover it.
00:36:51.002-00:36:51.008 (the same three-line sequence - posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it - repeats continuously from [2024-12-14 03:18:06.069212] through [2024-12-14 03:18:06.107938]; errno = 111 is ECONNREFUSED)
00:36:51.008 [2024-12-14 03:18:06.107917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.008 [2024-12-14 03:18:06.107938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:51.008 qpair failed and we were unable to recover it.
00:36:51.008 [2024-12-14 03:18:06.108033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.008 [2024-12-14 03:18:06.108054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.008 qpair failed and we were unable to recover it. 00:36:51.008 [2024-12-14 03:18:06.108217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.008 [2024-12-14 03:18:06.108238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.008 qpair failed and we were unable to recover it. 00:36:51.008 [2024-12-14 03:18:06.108392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.108414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.108504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.108525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.108689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.108710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.108811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.108832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.108997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.109018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.109179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.109200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.109377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.109409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.109528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.109559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 
00:36:51.009 [2024-12-14 03:18:06.109728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.109759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.109948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.109968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.110113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.110153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.110280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.110310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.110496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.110527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.110641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.110672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.110929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.110949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.111183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.111204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.111322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.111344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.111526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.111550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 
00:36:51.009 [2024-12-14 03:18:06.111730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.111752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.111901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.111923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.112094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.112125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.112250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.112281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.112461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.112496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.112790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.112821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.112992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.113023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.113255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.113276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.113430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.113453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.113545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.113567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 
00:36:51.009 [2024-12-14 03:18:06.113664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.113685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.113784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.113805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.114019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.114041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.114139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.114161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.114347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.114369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.114529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.114550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.114660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.114684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.009 [2024-12-14 03:18:06.114784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.009 [2024-12-14 03:18:06.114805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.009 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.114964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.114985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.115170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.115202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 
00:36:51.010 [2024-12-14 03:18:06.115322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.115355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.115526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.115557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.115684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.115716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.115840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.115871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.116075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.116107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.116276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.116297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.116479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.116505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.116720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.116740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.116836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.116857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.116953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.116973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 
00:36:51.010 [2024-12-14 03:18:06.117124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.117145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.117319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.117341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.117516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.117536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.117622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.117643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.117818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.117839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.117949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.117970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.118076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.118097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.118269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.118290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.118389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.118411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 00:36:51.010 [2024-12-14 03:18:06.118516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.010 [2024-12-14 03:18:06.118536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.010 qpair failed and we were unable to recover it. 
00:36:51.294 [2024-12-14 03:18:06.118660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.118681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.118778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.118798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.118907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.118927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.119074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.119095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.119242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.119263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.119417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.119438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.119607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.119628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.119816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.119837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.120131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.120152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.120325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.120347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 
00:36:51.295 [2024-12-14 03:18:06.120505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.120527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.120632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.120652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.120814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.120835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.121004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.121029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.121126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.121148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.121328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.121350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.121443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.121464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.121632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.121672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.121875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.121906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.122094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.122125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 
00:36:51.295 [2024-12-14 03:18:06.122300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.122327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.122426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.122447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.122595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.122616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.122713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.122734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.122912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.122933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.123105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.123125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.123293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.123321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.123453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.123485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.123620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.123651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.123772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.123803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 
00:36:51.295 [2024-12-14 03:18:06.123989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.124019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.124131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.124163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.124399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.124432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.124716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.295 [2024-12-14 03:18:06.124747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.295 qpair failed and we were unable to recover it. 00:36:51.295 [2024-12-14 03:18:06.124934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.124965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.125195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.125216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.125297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.125324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.125493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.125515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.125610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.125631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.125723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.125744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 
00:36:51.296 [2024-12-14 03:18:06.125904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.125925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.126028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.126049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.126214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.126234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.126329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.126350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.126431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.126455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.126534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.126555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.126765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.126786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.126903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.126923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.127039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.127061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.127280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.127301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 
00:36:51.296 [2024-12-14 03:18:06.127472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.127492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.127684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.127705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.127868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.127889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.128047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.128069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.128292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.128320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.128484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.128506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.128591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.128612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.128765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.128786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.128877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.128898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.128993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.129014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 
00:36:51.296 [2024-12-14 03:18:06.129176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.129197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.129441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.129463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.129609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.129630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.129810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.129842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.130012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.130043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.130241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.130272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.130460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.130484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.130636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.130673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.130861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.130893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.131079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.131109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 
00:36:51.296 [2024-12-14 03:18:06.131278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.131317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.131578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.296 [2024-12-14 03:18:06.131599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.296 qpair failed and we were unable to recover it. 00:36:51.296 [2024-12-14 03:18:06.131708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.131729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.131881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.131901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.132087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.132118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.132240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.132272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.132421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.132454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.132704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.132735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.132854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.132885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.133061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.133093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 
00:36:51.297 [2024-12-14 03:18:06.133280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.133311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.133487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.133512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.133701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.133732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.133907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.133938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.134109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.134141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.134319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.134341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.134513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.134534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.134699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.134720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.134934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.134955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.135192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.135213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 
00:36:51.297 [2024-12-14 03:18:06.135360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.135384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.135490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.135511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.135669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.135690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.135907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.135929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.136132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.136162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.136436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.136468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.136724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.136756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.136877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.136908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.137094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.137125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.137239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.137271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 
00:36:51.297 [2024-12-14 03:18:06.137462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.137484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.137629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.137650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.137806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.137827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.137922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.137943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.138157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.138177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.138293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.138318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.138474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.138495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.138595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.138616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.138727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.138752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.138836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.138856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 
00:36:51.297 [2024-12-14 03:18:06.139079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.139110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.297 qpair failed and we were unable to recover it. 00:36:51.297 [2024-12-14 03:18:06.139295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.297 [2024-12-14 03:18:06.139336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.139513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.139544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.139712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.139742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.139912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.139942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.140225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.140246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.140362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.140384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.140609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.140631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.140849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.140870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.140967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.140988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 
00:36:51.298 [2024-12-14 03:18:06.141166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.141187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.141288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.141309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.141424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.141444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.141527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.141549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.141707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.141728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.141878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.141899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.142069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.142091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.142334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.142355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.142437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.142457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.142648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.142669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 
00:36:51.298 [2024-12-14 03:18:06.142889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.142910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.143071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.143092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.143333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.143354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.143457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.143478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.143653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.143674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.143897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.143918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.144016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.144037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.144223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.144243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.144351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.144373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.144471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.144493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 
00:36:51.298 [2024-12-14 03:18:06.144600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.144621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.144783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.144805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.144900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.144921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.145010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.145031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.145194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.145215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.145385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.145406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.145486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.145507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.145669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.145690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.145805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.145826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.146039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.146108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 
00:36:51.298 [2024-12-14 03:18:06.146344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.298 [2024-12-14 03:18:06.146383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.298 qpair failed and we were unable to recover it. 00:36:51.298 [2024-12-14 03:18:06.146518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.146550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.146656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.146680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.146765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.146786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.146934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.146954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.147172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.147193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.147370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.147391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.147470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.147489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.147585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.147606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.147713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.147734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 
00:36:51.299 [2024-12-14 03:18:06.147955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.147977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.148072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.148094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.148253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.148272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.148516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.148539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.148644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.148664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.148761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.148781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.149022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.149044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.149145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.149166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.149264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.149285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.149410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.149432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 
00:36:51.299 [2024-12-14 03:18:06.149584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.149604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.149779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.149801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.149914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.149935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.150093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.150113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.150195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.150216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.150436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.150458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.150638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.150659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.150882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.150903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.151149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.151170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.151328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.151349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 
00:36:51.299 [2024-12-14 03:18:06.151497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.151518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.151624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.151645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.151888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.151909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.152149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.152170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.299 [2024-12-14 03:18:06.152333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.299 [2024-12-14 03:18:06.152354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.299 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.152441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.152462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.152676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.152697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.152855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.152875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.153105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.153126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.153223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.153244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 
00:36:51.300 [2024-12-14 03:18:06.153347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.153368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.153530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.153550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.153729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.153749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.153838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.153859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.154075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.154097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.154181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.154202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.154286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.154306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.154467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.154489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.154661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.154681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.154867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.154888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 
00:36:51.300 [2024-12-14 03:18:06.155044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.155065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.155141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.155160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.155336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.155358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.155543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.155571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.155731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.155752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.156000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.156020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.156238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.156258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.156495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.156516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.156679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.156699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.156810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.156831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 
00:36:51.300 [2024-12-14 03:18:06.156944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.156964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.157133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.157154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.157353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.157385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.157574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.157606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.157782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.157813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.158001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.158032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.158226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.158264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.158416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.158437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.158534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.158554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.158651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.158673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 
00:36:51.300 [2024-12-14 03:18:06.158923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.158945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.159189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.159220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.159394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.159426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.159599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.300 [2024-12-14 03:18:06.159631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.300 qpair failed and we were unable to recover it. 00:36:51.300 [2024-12-14 03:18:06.159884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.159914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 00:36:51.301 [2024-12-14 03:18:06.160091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.160121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 00:36:51.301 [2024-12-14 03:18:06.160234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.160265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 00:36:51.301 [2024-12-14 03:18:06.160461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.160483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 00:36:51.301 [2024-12-14 03:18:06.160695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.160716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 00:36:51.301 [2024-12-14 03:18:06.160943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.160975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 
00:36:51.301 [2024-12-14 03:18:06.161156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.161193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 00:36:51.301 [2024-12-14 03:18:06.161451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.161483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 00:36:51.301 [2024-12-14 03:18:06.161620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.161641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 00:36:51.301 [2024-12-14 03:18:06.161794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.161815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 00:36:51.301 [2024-12-14 03:18:06.161923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.161944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 00:36:51.301 [2024-12-14 03:18:06.162117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.162148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 00:36:51.301 [2024-12-14 03:18:06.162330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.162362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 00:36:51.301 [2024-12-14 03:18:06.162574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.162606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 00:36:51.301 [2024-12-14 03:18:06.162802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.162833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 00:36:51.301 [2024-12-14 03:18:06.163017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.163048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it. 
00:36:51.301 [2024-12-14 03:18:06.163219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.301 [2024-12-14 03:18:06.163249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.301 qpair failed and we were unable to recover it.
00:36:51.301 [... the same pair of errors — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 — repeats continuously from 03:18:06.163 through 03:18:06.202, each attempt ending with "qpair failed and we were unable to recover it."; duplicate log entries omitted ...]
00:36:51.306 [2024-12-14 03:18:06.202333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.306 qpair failed and we were unable to recover it.
00:36:51.306 [2024-12-14 03:18:06.202512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.306 [2024-12-14 03:18:06.202553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.306 qpair failed and we were unable to recover it. 00:36:51.306 [2024-12-14 03:18:06.202827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.306 [2024-12-14 03:18:06.202857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.306 qpair failed and we were unable to recover it. 00:36:51.306 [2024-12-14 03:18:06.203032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.306 [2024-12-14 03:18:06.203063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.306 qpair failed and we were unable to recover it. 00:36:51.306 [2024-12-14 03:18:06.203250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.306 [2024-12-14 03:18:06.203271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.306 qpair failed and we were unable to recover it. 00:36:51.306 [2024-12-14 03:18:06.203380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.306 [2024-12-14 03:18:06.203401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.306 qpair failed and we were unable to recover it. 00:36:51.306 [2024-12-14 03:18:06.203494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.203515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.203753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.203775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.203992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.204013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.204249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.204271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.204512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.204534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 
00:36:51.307 [2024-12-14 03:18:06.204697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.204718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.204944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.204966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.205131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.205174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.205292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.205332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.205461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.205493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.205680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.205711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.205917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.205948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.206141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.206176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.206305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.206350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.206529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.206560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 
00:36:51.307 [2024-12-14 03:18:06.206662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.206682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.206848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.206869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.207059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.207080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.207251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.207282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.207406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.207439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.207714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.207746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.207915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.207946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.208135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.208166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.208275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.208306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.208511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.208542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 
00:36:51.307 [2024-12-14 03:18:06.208720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.208752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.208873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.208904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.209186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.209218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.209385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.209418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.209551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.209582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.209685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.209715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.209898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.209928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.210110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.210141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.210325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.210347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.210447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.210467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 
00:36:51.307 [2024-12-14 03:18:06.210568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.210588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.210803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.210824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.211048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.211069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.211165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.211184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.307 qpair failed and we were unable to recover it. 00:36:51.307 [2024-12-14 03:18:06.211295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.307 [2024-12-14 03:18:06.211322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.211479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.211500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.211669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.211690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.211787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.211808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.211907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.211928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.212011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.212032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 
00:36:51.308 [2024-12-14 03:18:06.212204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.212226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.212400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.212421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.212508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.212528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.212626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.212647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.212762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.212783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.212874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.212894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.213007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.213029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.213125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.213146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.213245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.213265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.213350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.213371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 
00:36:51.308 [2024-12-14 03:18:06.213468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.213488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.213592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.213612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.213763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.213784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.213881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.213903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.214071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.214092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.214271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.214295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.214455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.214476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.214622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.214642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.214738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.214767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.214935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.214956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 
00:36:51.308 [2024-12-14 03:18:06.215052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.215071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.215152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.215172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.215335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.215357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.215599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.215631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.215804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.215835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.216025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.216057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.216320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.216342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.216507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.216527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.216608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.216628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 00:36:51.308 [2024-12-14 03:18:06.216782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.308 [2024-12-14 03:18:06.216804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.308 qpair failed and we were unable to recover it. 
00:36:51.309 [2024-12-14 03:18:06.216896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.216920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.217077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.217099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.217242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.217263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.217368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.217390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.217575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.217596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.217687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.217707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.217942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.217963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.218070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.218091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.218184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.218203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.218311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.218338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 
00:36:51.309 [2024-12-14 03:18:06.218506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.218527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.218739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.218760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.218858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.218882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.218985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.219006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.219098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.219117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.219225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.219246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.219409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.219431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.219515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.219534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.219678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.219699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.219856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.219877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 
00:36:51.309 [2024-12-14 03:18:06.220041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.220071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.220255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.220286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.220534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.220565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.220750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.220771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.220955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.220986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.221280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.221309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.221608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.221631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.221780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.221801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.221899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.221920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.222103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.222124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 
00:36:51.309 [2024-12-14 03:18:06.222364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.222386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.222467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.222487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.222580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.222599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.222691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.222711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.222799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.222821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.222924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.222945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.223097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.223118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.223293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.223318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.309 [2024-12-14 03:18:06.223406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.309 [2024-12-14 03:18:06.223426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.309 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.223589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.223615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 
00:36:51.310 [2024-12-14 03:18:06.223709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.223730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.223880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.223901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.224069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.224090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.224252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.224273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.224512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.224534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.224650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.224671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.224754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.224774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.224861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.224882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.225054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.225076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.225304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.225346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 
00:36:51.310 [2024-12-14 03:18:06.225562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.225584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.225743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.225764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.225996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.226017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.226244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.226266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.226508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.226531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.226723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.226744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.226902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.226924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.227037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.227058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.227151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.227171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 00:36:51.310 [2024-12-14 03:18:06.227355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.310 [2024-12-14 03:18:06.227377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.310 qpair failed and we were unable to recover it. 
00:36:51.311 [2024-12-14 03:18:06.231075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.311 [2024-12-14 03:18:06.231128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420
00:36:51.311 qpair failed and we were unable to recover it.
00:36:51.311 [2024-12-14 03:18:06.232329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.311 [2024-12-14 03:18:06.232354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:51.311 qpair failed and we were unable to recover it.
00:36:51.314 [2024-12-14 03:18:06.251851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.314 [2024-12-14 03:18:06.251921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420
00:36:51.314 qpair failed and we were unable to recover it.
00:36:51.314 [2024-12-14 03:18:06.252581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.314 [2024-12-14 03:18:06.252607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:51.314 qpair failed and we were unable to recover it.
00:36:51.315 [2024-12-14 03:18:06.260224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.260245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.260414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.260436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.260653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.260674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.260763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.260784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.260934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.260955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.261115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.261135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.261213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.261240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.261482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.261503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.261667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.261687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.261926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.261947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 
00:36:51.315 [2024-12-14 03:18:06.262113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.262134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.262357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.262378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.262527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.262548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.262717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.262738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.262947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.262968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.263129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.263150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.263389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.263410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.263573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.263594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.263774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.263805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 00:36:51.315 [2024-12-14 03:18:06.264043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.315 [2024-12-14 03:18:06.264073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.315 qpair failed and we were unable to recover it. 
00:36:51.315 [2024-12-14 03:18:06.264251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.264281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.264462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.264484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.264670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.264691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.264863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.264887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.265051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.265071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.265189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.265209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.265451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.265484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.265598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.265629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.265823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.265854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.265972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.266002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 
00:36:51.316 [2024-12-14 03:18:06.266191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.266221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.266341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.266373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.266559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.266580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.266815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.266844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.267000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.267021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.267217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.267238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.267387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.267409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.267572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.267615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.267740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.267771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.268037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.268067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 
00:36:51.316 [2024-12-14 03:18:06.268281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.268321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.268495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.268526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.268755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.268785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.268918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.268949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.269076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.269106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.269252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.269272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.269533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.269555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.269713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.269735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.269891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.269911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.270002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.270022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 
00:36:51.316 [2024-12-14 03:18:06.270219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.270240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.270414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.270436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.270587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.270608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.270882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.270903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.270987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.271008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.271112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.271135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.271253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.271274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.271459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.271481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.271630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.271651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.316 qpair failed and we were unable to recover it. 00:36:51.316 [2024-12-14 03:18:06.271821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.316 [2024-12-14 03:18:06.271862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 
00:36:51.317 [2024-12-14 03:18:06.272084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.272114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.272233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.272265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.272397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.272428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.272624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.272645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.272804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.272824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.272975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.272995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.273165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.273186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.273339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.273360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.273505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.273526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.273674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.273695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 
00:36:51.317 [2024-12-14 03:18:06.273813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.273834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.273997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.274018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.274102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.274123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.274341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.274363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.274447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.274467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.274615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.274636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.274732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.274752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.274917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.274939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.275085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.275106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.275273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.275311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 
00:36:51.317 [2024-12-14 03:18:06.275560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.275590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.275698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.275728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.275906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.275937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.276120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.276151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.276356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.276378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.276478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.276497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.276655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.276676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.276778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.276798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.277030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.277062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.277246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.277277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 
00:36:51.317 [2024-12-14 03:18:06.277460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.277492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.277749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.277770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.277883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.277903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.278055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.317 [2024-12-14 03:18:06.278076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.317 qpair failed and we were unable to recover it. 00:36:51.317 [2024-12-14 03:18:06.278182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.278203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.278298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.278326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.278425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.278446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.278600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.278620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.278777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.278798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.278893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.278914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 
00:36:51.318 [2024-12-14 03:18:06.278992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.279011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.279094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.279118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.279278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.279300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.279390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.279409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.279643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.279664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.279746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.279766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.279870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.279891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.279974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.279993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.280111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.280132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.280290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.280311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 
00:36:51.318 [2024-12-14 03:18:06.280396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.280417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.280576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.280598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.280816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.280838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.281005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.281027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.281186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.281207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.281382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.281405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.281485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.281505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.281600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.281621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.281779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.281801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.281884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.281906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 
00:36:51.318 [2024-12-14 03:18:06.282127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.282149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.282324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.282346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.282448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.282469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.282573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.282594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.282779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.282800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.282947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.282967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.283112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.283133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.283290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.283316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.283419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.283443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.283586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.283607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 
00:36:51.318 [2024-12-14 03:18:06.283695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.283715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.283867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.283888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.283972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.283993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.318 [2024-12-14 03:18:06.284090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.318 [2024-12-14 03:18:06.284110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.318 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.284192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.284212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.284384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.284405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.284553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.284574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.284674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.284694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.284900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.284921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.285078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.285099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 
00:36:51.319 [2024-12-14 03:18:06.285276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.285296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.285389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.285410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.285649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.285671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.285837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.285858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.286043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.286064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.286328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.286350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.286626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.286647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.286763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.286783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.286956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.286976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.287137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.287159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 
00:36:51.319 [2024-12-14 03:18:06.287329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.287351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.287596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.287617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.287796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.287817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.287905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.287926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.288020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.288041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.288280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.288300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.288407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.288429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.288545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.288565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.288729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.288750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.288843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.288864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 
00:36:51.319 [2024-12-14 03:18:06.288958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.288978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.289130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.289151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.289238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.289259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.289360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.289381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.289551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.289572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.289670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.289691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.289855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.289876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.290024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.290044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.290195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.290216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.290370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.290391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 
00:36:51.319 [2024-12-14 03:18:06.290485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.290506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.290619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.319 [2024-12-14 03:18:06.290640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.319 qpair failed and we were unable to recover it. 00:36:51.319 [2024-12-14 03:18:06.290793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.290815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.290898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.290919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.291093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.291114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.291214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.291236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.291389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.291411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.291491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.291512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.291664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.291685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.291833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.291854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 
00:36:51.320 [2024-12-14 03:18:06.291952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.291973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.292068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.292089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.292270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.292291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.292453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.292474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.292624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.292645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.292803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.292823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.292987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.293008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.293181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.293202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.293285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.293306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.293514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.293535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 
00:36:51.320 [2024-12-14 03:18:06.293701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.293721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.293879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.293900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.294073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.294094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.294243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.294264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.294413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.294435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.294517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.294537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.294759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.294784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.294939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.294959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.295177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.295198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.295380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.295406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 
00:36:51.320 [2024-12-14 03:18:06.295569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.295590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.295683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.295704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.295869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.295890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.296055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.296076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.296236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.296257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.296447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.296468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.296568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.296589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.320 qpair failed and we were unable to recover it. 00:36:51.320 [2024-12-14 03:18:06.296688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.320 [2024-12-14 03:18:06.296709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.296882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.296902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.297000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.297021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 
00:36:51.321 [2024-12-14 03:18:06.297123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.297144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.297243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.297264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.297364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.297386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.297551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.297572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.297744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.297765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.297939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.297960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.298045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.298066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.298169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.298190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.298340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.298362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.298546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.298568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 
00:36:51.321 [2024-12-14 03:18:06.298657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.298676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.298827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.298848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.299096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.299117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.299202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.299225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.299403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.299425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.299576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.299598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.299697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.299718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.299867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.299887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.299988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.300009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.300167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.300187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 
00:36:51.321 [2024-12-14 03:18:06.300351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.300372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.300455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.300477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.300633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.300654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.300760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.300781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.300999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.301020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.301207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.301228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.301443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.301466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.301563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.301584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.301826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.301847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.302016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.302037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 
00:36:51.321 [2024-12-14 03:18:06.302140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.302161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.302400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.302421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.321 [2024-12-14 03:18:06.302579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.321 [2024-12-14 03:18:06.302600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.321 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.302755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.302775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.302864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.302885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.303035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.303056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.303161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.303182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.303334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.303356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.303593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.303615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.303706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.303727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 
00:36:51.322 [2024-12-14 03:18:06.303819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.303844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.303991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.304012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.304163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.304184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.304361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.304383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.304534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.304554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.304699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.304719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.304883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.304903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.304997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.305017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.305110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.305131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.305302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.305329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 
00:36:51.322 [2024-12-14 03:18:06.305511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.305532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.305608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.305632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.305801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.305822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.305982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.306002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.306118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.306139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.306286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.306307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.306424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.306445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.306542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.306563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.306791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.306813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.307075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.307096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 
00:36:51.322 [2024-12-14 03:18:06.307195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.307216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.307443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.307465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.307572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.307593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.322 [2024-12-14 03:18:06.307821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.322 [2024-12-14 03:18:06.307841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.322 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.308003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.308023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.308166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.308186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.308271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.308292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.308388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.308409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.308508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.308530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.308677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.308698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 
00:36:51.323 [2024-12-14 03:18:06.308844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.308865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.309011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.309031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.309180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.309201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.309362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.309384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.309478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.309499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.309599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.309620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.309879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.309900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.310055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.310076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.310181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.310202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.310322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.310343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 
00:36:51.323 [2024-12-14 03:18:06.310510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.310531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.310623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.310644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.310745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.310766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.310914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.310935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.311096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.311117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.311278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.311299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.311470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.311492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.311582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.311602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.311756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.311776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.311860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.311879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 
00:36:51.323 [2024-12-14 03:18:06.312097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.312118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.312202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.312223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.312397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.312419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.312526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.312547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.312629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.312649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.312745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.312765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.312923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.312944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.313088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.313108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.323 qpair failed and we were unable to recover it. 00:36:51.323 [2024-12-14 03:18:06.313275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.323 [2024-12-14 03:18:06.313295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.324 qpair failed and we were unable to recover it. 00:36:51.324 [2024-12-14 03:18:06.313401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.324 [2024-12-14 03:18:06.313422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.324 qpair failed and we were unable to recover it. 
00:36:51.324 [2024-12-14 03:18:06.313514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.324 [2024-12-14 03:18:06.313535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:51.324 qpair failed and we were unable to recover it.
00:36:51.326 [2024-12-14 03:18:06.329090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.326 [2024-12-14 03:18:06.329162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420
00:36:51.326 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1ca6cd0, briefly 0x7f3b58000b90, with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 03:18:06.313 through 03:18:06.348 (Jenkins timestamps 00:36:51.324-00:36:51.330) ...]
00:36:51.330 [2024-12-14 03:18:06.349039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.349060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.349229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.349251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.349409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.349430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.349650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.349671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.349787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.349808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.349968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.349989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.350081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.350103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.350289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.350311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.350497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.350519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.350620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.350641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 
00:36:51.330 [2024-12-14 03:18:06.350862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.350884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.351054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.351075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.351183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.351205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.351366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.351388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.351555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.351576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.351731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.351753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.351862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.351882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.352040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.352061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.352244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.352266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.352369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.352393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 
00:36:51.330 [2024-12-14 03:18:06.352561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.352583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.352671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.352694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.352871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.352893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.353011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.353034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.353188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.353211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.353363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.353386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.353550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.353573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.353733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.353757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.353910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.353933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 00:36:51.330 [2024-12-14 03:18:06.354053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.330 [2024-12-14 03:18:06.354076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.330 qpair failed and we were unable to recover it. 
00:36:51.330 [2024-12-14 03:18:06.354346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.354371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.354484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.354506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.354604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.354626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.354882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.354903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.355141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.355164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.355322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.355345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.355496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.355518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.355623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.355646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.355796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.355817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.356041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.356064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 
00:36:51.331 [2024-12-14 03:18:06.356278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.356300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.356399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.356422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.356573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.356599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.356698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.356720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.356878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.356900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.357148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.357169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.357328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.357352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.357453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.357475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.357669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.357691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.357792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.357815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 
00:36:51.331 [2024-12-14 03:18:06.357899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.357922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.358088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.358110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.358342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.358365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.358466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.358489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.358576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.358597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.358758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.358779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.358871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.358893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.359122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.359144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.359293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.359321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.359417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.359440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 
00:36:51.331 [2024-12-14 03:18:06.359541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.359562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.359730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.359753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.359857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.359879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.360106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.360128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.360278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.331 [2024-12-14 03:18:06.360300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.331 qpair failed and we were unable to recover it. 00:36:51.331 [2024-12-14 03:18:06.360556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.360579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.360750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.360772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.360922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.360945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.361050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.361072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.361220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.361247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 
00:36:51.332 [2024-12-14 03:18:06.361401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.361424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.361538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.361560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.361713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.361735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.361904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.361925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.362035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.362058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.362153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.362176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.362331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.362355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.362507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.362530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.362719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.362742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.362835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.362858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 
00:36:51.332 [2024-12-14 03:18:06.362963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.362985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.363089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.363111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.363273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.363295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.363460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.363483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.363673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.363696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.363802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.363824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.363992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.364015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.364098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.364120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.364292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.364319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.364468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.364491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 
00:36:51.332 [2024-12-14 03:18:06.364659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.364681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.364835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.364856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.365013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.365037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.365279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.365300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.365402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.365425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.365589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.332 [2024-12-14 03:18:06.365612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.332 qpair failed and we were unable to recover it. 00:36:51.332 [2024-12-14 03:18:06.365763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.365785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.365872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.365895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.365985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.366008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.366228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.366250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 
00:36:51.333 [2024-12-14 03:18:06.366345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.366370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.366554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.366577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.366728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.366751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.366911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.366933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.367118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.367141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.367384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.367408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.367647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.367669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.367937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.367959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.368125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.368148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.368298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.368326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 
00:36:51.333 [2024-12-14 03:18:06.368486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.368510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.368616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.368639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.368806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.368829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.368940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.368962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.369110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.369132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.369224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.369247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.369405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.369428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.369515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.369538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.369623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.369646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.369796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.369819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 
00:36:51.333 [2024-12-14 03:18:06.369982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.370005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.370168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.370191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.370344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.370367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.370608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.370630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.370785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.370808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.371024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.371046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.371159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.371182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.371339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.371363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.371511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.371534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 00:36:51.333 [2024-12-14 03:18:06.371685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.333 [2024-12-14 03:18:06.371708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.333 qpair failed and we were unable to recover it. 
00:36:51.333 [2024-12-14 03:18:06.371890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.333 [2024-12-14 03:18:06.371914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:51.333 qpair failed and we were unable to recover it.
[... the same three-line sequence ("connect() failed, errno = 111" / "sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it.") repeats continuously from 03:18:06.372031 through 03:18:06.405740 ...]
00:36:51.630 [2024-12-14 03:18:06.404741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.404764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.404953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.404977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.405128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.405150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.405303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.405333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.405432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.405453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.405609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.405632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.405719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.405740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.405925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4c70 is same with the state(6) to be set 00:36:51.630 [2024-12-14 03:18:06.406182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.406250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.406534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.406606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 
00:36:51.630 [2024-12-14 03:18:06.406779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.406849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.407074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.407102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.407209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.407233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.407474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.407497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.407648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.407670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.407757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.407780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.407874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.407896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.408072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.408094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.408257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.408280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.408382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.408405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 
00:36:51.630 [2024-12-14 03:18:06.408655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.408678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.408812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.408854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.408981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.409015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.409156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.409188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.409320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.409345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.409442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.409464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.409705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.409728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.409817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.409840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.409956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.409980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.630 qpair failed and we were unable to recover it. 00:36:51.630 [2024-12-14 03:18:06.410150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.630 [2024-12-14 03:18:06.410172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 
00:36:51.631 [2024-12-14 03:18:06.410286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.410309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.410559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.410582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.410679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.410702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.410851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.410874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.411026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.411048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.411209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.411233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.411335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.411359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.411521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.411544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.411630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.411652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.411867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.411890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 
00:36:51.631 [2024-12-14 03:18:06.411997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.412020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.412130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.412152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.412246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.412269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.412360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.412382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.412600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.412623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.412785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.412808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.413056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.413079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.413301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.413329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.413424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.413450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.413603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.413626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 
00:36:51.631 [2024-12-14 03:18:06.413833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.413856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.414021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.414043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.414216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.414239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.414391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.414416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.414505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.414526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.414624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.414647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.414735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.414758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.414846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.414869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.414973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.414996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 00:36:51.631 [2024-12-14 03:18:06.415220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.631 [2024-12-14 03:18:06.415244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.631 qpair failed and we were unable to recover it. 
00:36:51.631 [2024-12-14 03:18:06.415411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.415434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.415585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.415608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.415835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.415858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.415964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.415988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.416088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.416111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.416204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.416227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.416320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.416342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.416501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.416523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.416696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.416719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.416881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.416904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 
00:36:51.632 [2024-12-14 03:18:06.417132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.417154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.417310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.417339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.417528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.417550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.417648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.417671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.417773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.417796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.417959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.417986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.418097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.418120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.418269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.418292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.418548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.418571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.418720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.418743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 
00:36:51.632 [2024-12-14 03:18:06.418908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.418931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.419019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.419039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.419211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.419234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.419346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.419370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.419517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.419540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.419763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.419786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.419867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.419888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.420002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.420024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.420180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.420202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.420369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.420393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 
00:36:51.632 [2024-12-14 03:18:06.420554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.420578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.420685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.420708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.420796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.420818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.420901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.420924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.421093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.632 [2024-12-14 03:18:06.421116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.632 qpair failed and we were unable to recover it. 00:36:51.632 [2024-12-14 03:18:06.421275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.421298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.421522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.421544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.421764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.421787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.421883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.421905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.422057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.422080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 
00:36:51.633 [2024-12-14 03:18:06.422234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.422256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.422431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.422454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.422548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.422574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.422742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.422765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.423004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.423026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.423202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.423224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.423375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.423400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.423549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.423573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.423815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.423839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.424079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.424102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 
00:36:51.633 [2024-12-14 03:18:06.424209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.424232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.424434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.424457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.424564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.424587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.424747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.424770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.424983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.425006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.425115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.425138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.425317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.425341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.425428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.425448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.425628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.425651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.425831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.425854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 
00:36:51.633 [2024-12-14 03:18:06.426089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.426112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.426221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.426244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.426349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.426372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.426494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.426517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.426619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.426642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.426811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.426833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.427024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.427047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.427139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.427162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.427330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.427354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.427522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.427545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 
00:36:51.633 [2024-12-14 03:18:06.427645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.427667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.427897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.427920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.428158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.428180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.428398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.633 [2024-12-14 03:18:06.428421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.633 qpair failed and we were unable to recover it. 00:36:51.633 [2024-12-14 03:18:06.428570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.428594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.428700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.428723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.428807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.428831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.429047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.429069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.429175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.429198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.429358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.429382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 
00:36:51.634 [2024-12-14 03:18:06.429536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.429558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.429721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.429743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.429899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.429923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.430097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.430120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.430222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.430244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.430343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.430366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.430461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.430484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.430575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.430597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.430836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.430858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.431071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.431094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 
00:36:51.634 [2024-12-14 03:18:06.431239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.431261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.431425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.431449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.431652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.431674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.431891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.431914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.432000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.432020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.432184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.432207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.432305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.432333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.432560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.432583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.432821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.432843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.433011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.433035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 
00:36:51.634 [2024-12-14 03:18:06.433187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.433209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.433363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.433387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.433493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.433517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.433664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.433687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.433854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.433878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.434025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.434048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.634 [2024-12-14 03:18:06.434263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.634 [2024-12-14 03:18:06.434286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.634 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.434453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.434477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.434577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.434598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.434685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.434707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 
00:36:51.635 [2024-12-14 03:18:06.434925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.434952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.435055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.435078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.435189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.435212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.435440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.435463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.435620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.435643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.435745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.435767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.435864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.435887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.435998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.436021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.436180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.436203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.436295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.436324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 
00:36:51.635 [2024-12-14 03:18:06.436426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.436449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.436552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.436575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.436789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.436812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.436979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.437001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.437109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.437132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.437220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.437243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.437343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.437367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.437471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.437494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.437574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.437597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.437680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.437703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 
00:36:51.635 [2024-12-14 03:18:06.437859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.437882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.438031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.438054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.438202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.438225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.438326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.438350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.438509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.438529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.438629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.438648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.438887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.438907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.439004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.439027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.439112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.439132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.439307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.439334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 
00:36:51.635 [2024-12-14 03:18:06.439428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.439448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.439600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.439620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.439844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.439864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.439967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.439987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.440154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.440174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.440283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.635 [2024-12-14 03:18:06.440304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.635 qpair failed and we were unable to recover it. 00:36:51.635 [2024-12-14 03:18:06.440426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.440447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.440552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.440572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.440667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.440687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.440841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.440861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 
00:36:51.636 [2024-12-14 03:18:06.441010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.441031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.441200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.441220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.441300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.441328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.441481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.441502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.441693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.441714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.441865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.441885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.442032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.442052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.442148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.442168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.442251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.442271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.442518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.442540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 
00:36:51.636 [2024-12-14 03:18:06.442702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.442723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.442819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.442840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.442929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.442950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.443102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.443122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.443268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.443297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.443466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.443488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.443589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.443610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.443768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.443789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.443895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.443916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.444026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.444048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 
00:36:51.636 [2024-12-14 03:18:06.444142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.444165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.444328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.444352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.444520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.444542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.444713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.444735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.444822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.444844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.445012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.445034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.445125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.445147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.445306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.445333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.445521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.445544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.445741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.445763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 
00:36:51.636 [2024-12-14 03:18:06.445984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.446006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.446176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.446199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.446399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.446422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.446515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.446537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.446647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.446670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.636 qpair failed and we were unable to recover it. 00:36:51.636 [2024-12-14 03:18:06.446757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.636 [2024-12-14 03:18:06.446779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.446960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.446982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.447153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.447175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.447342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.447365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.447529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.447552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 
00:36:51.637 [2024-12-14 03:18:06.447776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.447799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.448065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.448088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.448197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.448220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.448380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.448403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.448553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.448576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.448726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.448749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.448897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.448919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.449023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.449046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.449143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.449164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.449326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.449349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 
00:36:51.637 [2024-12-14 03:18:06.449504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.449526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.449693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.449715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.449821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.449843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.449993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.450015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.450175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.450197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.450428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.450455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.450555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.450578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.450680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.450702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.450806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.450829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.450986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.451008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 
00:36:51.637 [2024-12-14 03:18:06.451107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.451129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.451242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.451265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.451358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.451381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.451538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.451560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.451714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.451737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.451822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.451845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.452039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.452062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.452216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.452239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.452407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.452430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.452515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.452537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 
00:36:51.637 [2024-12-14 03:18:06.452717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.452740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.452911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.452934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.453092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.453114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.453215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.453238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.637 qpair failed and we were unable to recover it. 00:36:51.637 [2024-12-14 03:18:06.453340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.637 [2024-12-14 03:18:06.453363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.453581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.453604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.453702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.453725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.453887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.453910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.454079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.454101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.454198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.454220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 
00:36:51.638 [2024-12-14 03:18:06.454323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.454347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.454500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.454523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.454740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.454767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.454927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.454950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.455113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.455136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.455389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.455412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.455561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.455584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.455815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.455838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.455999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.456023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.456121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.456143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 
00:36:51.638 [2024-12-14 03:18:06.456288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.456327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.456568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.456591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.456756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.456779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.456926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.456950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.457034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.457057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.457170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.457193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.457353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.457377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.457544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.457566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.457724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.457746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.457911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.457934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 
00:36:51.638 [2024-12-14 03:18:06.458108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.458130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.458290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.458319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.458480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.458502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.458656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.458679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.458778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.458800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.458952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.458974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.459076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.459098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.459325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.459349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.459500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.459522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.459603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.459629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 
00:36:51.638 [2024-12-14 03:18:06.459729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.459751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.459858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.459880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.459964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.459985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.638 [2024-12-14 03:18:06.460084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.638 [2024-12-14 03:18:06.460106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.638 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.460275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.460297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.460480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.460503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.460653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.460676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.460852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.460874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.460968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.460990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.461145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.461166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 
00:36:51.639 [2024-12-14 03:18:06.461402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.461425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.461516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.461539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.461707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.461730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.461903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.461925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.462150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.462173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.462414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.462438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.462595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.462617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.462773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.462795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.462948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.462971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.463073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.463095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 
00:36:51.639 [2024-12-14 03:18:06.463264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.463287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.463465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.463487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.463581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.463605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.463695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.463717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.463829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.463852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.464019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.464043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.464256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.464279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.464506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.464529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.464694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.464716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.464800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.464823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 
00:36:51.639 [2024-12-14 03:18:06.464917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.464941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.465030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.465053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.465295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.465327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.465490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.465514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.465678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.465701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.465928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.465951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.466127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.466150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.639 [2024-12-14 03:18:06.466367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.639 [2024-12-14 03:18:06.466391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.639 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.466587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.466611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.466856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.466880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 
00:36:51.640 [2024-12-14 03:18:06.466974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.466998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.467149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.467172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.467334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.467358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.467511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.467534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.467631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.467655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.467832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.467855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.467975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.467998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.468091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.468115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.468268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.468291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.468504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.468576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 
00:36:51.640 [2024-12-14 03:18:06.468870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.468940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.469153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.469190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.469407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.469443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.469626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.469657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.469804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.469837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.470024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.470050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.470217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.470240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.470335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.470359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.470457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.470479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.470586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.470609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 
00:36:51.640 [2024-12-14 03:18:06.470781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.470804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.471043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.471066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.471231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.471254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.471430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.471453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.471698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.471721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.471835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.471858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.472009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.472031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.472154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.472190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.472334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.472368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.472544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.472576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 
00:36:51.640 [2024-12-14 03:18:06.472809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.472834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.473041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.473063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.473233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.473256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.473417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.473441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.473530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.473551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.473656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.473678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.473829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.640 [2024-12-14 03:18:06.473852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.640 qpair failed and we were unable to recover it. 00:36:51.640 [2024-12-14 03:18:06.473954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.473977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.474138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.474161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.474250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.474271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 
00:36:51.641 [2024-12-14 03:18:06.474481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.474504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.474599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.474620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.474823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.474893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.475032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.475066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.475296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.475351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.475482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.475506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.475598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.475619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.475771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.475794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.475954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.475976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.476137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.476159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 
00:36:51.641 [2024-12-14 03:18:06.476245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.476265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.476363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.476387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.476473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.476493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.476591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.476614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.476788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.476814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.476974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.476996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.477222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.477246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.477344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.477365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.477469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.477492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.477596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.477618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 
00:36:51.641 [2024-12-14 03:18:06.477858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.477880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.478062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.478085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.478304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.478332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.478480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.478503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.478606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.478628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.478823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.478845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.479019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.479042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.479206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.479229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.479384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.479408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.479558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.479581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 
00:36:51.641 [2024-12-14 03:18:06.479675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.479696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.479801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.479824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.480007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.480031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.480255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.480278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.480389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.480413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.480581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.641 [2024-12-14 03:18:06.480604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.641 qpair failed and we were unable to recover it. 00:36:51.641 [2024-12-14 03:18:06.480756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.480779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.480875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.480897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.480987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.481009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.481161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.481183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 
00:36:51.642 [2024-12-14 03:18:06.481350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.481373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.481624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.481654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.481815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.481837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.481933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.481956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.482215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.482239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.482423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.482446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.482544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.482567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.482747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.482770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.482985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.483007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.483113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.483135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 
00:36:51.642 [2024-12-14 03:18:06.483296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.483326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.483487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.483509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.483724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.483746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.483894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.483917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.484139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.484162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.484372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.484397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.484555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.484578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.484834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.484857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.485024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.485047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.485166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.485189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 
00:36:51.642 [2024-12-14 03:18:06.485353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.485376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.485556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.485579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.485736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.485759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.485979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.486002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.486118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.486141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.486381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.486405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.486507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.486529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.486633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.486656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.486751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.486777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.486996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.487018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 
00:36:51.642 [2024-12-14 03:18:06.487171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.487194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.487367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.487391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.487556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.487579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.487745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.487767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.487920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.487943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.642 [2024-12-14 03:18:06.488108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.642 [2024-12-14 03:18:06.488131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.642 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.488215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.488237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.488393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.488417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.488636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.488658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.488897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.488919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 
00:36:51.643 [2024-12-14 03:18:06.489159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.489181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.489369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.489393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.489634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.489658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.489876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.489898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.490003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.490026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.490124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.490146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.490305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.490333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.490425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.490447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.490540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.490563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.490652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.490673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 
00:36:51.643 [2024-12-14 03:18:06.490776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.490799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.490948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.490971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.491083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.491105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.491200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.491221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.491398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.491421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.491589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.491612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.491708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.491731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.491887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.491910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.492089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.492111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.492261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.492285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 
00:36:51.643 [2024-12-14 03:18:06.492549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.492572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.492672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.492695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.492856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.492878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.493043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.493066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.493165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.493187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.493341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.493365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.493460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.493484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.493645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.493668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.493755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.493776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 00:36:51.643 [2024-12-14 03:18:06.493987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.494058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.643 qpair failed and we were unable to recover it. 
00:36:51.643 [2024-12-14 03:18:06.494281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.643 [2024-12-14 03:18:06.494329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.494451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.494485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.494648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.494673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.494779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.494802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.494893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.494915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.495134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.495156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.495269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.495292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.495473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.495496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.495595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.495617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.495781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.495804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 
00:36:51.644 [2024-12-14 03:18:06.495892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.495912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.496078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.496101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.496344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.496368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.496482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.496503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.496673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.496696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.496849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.496871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.496963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.496985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.497155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.497178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.497294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.497323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.497539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.497563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 
00:36:51.644 [2024-12-14 03:18:06.497747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.497769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.497882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.497907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.498008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.498031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.498197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.498220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.498325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.498349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.498609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.498632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.498745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.498772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.498956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.498979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.499138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.499161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.499338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.499362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 
00:36:51.644 [2024-12-14 03:18:06.499529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.499553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.499730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.499752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.499842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.499863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.500012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.500034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.500132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.500155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.500343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.500366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.500446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.500468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.500636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.500658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.500811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.500833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 00:36:51.644 [2024-12-14 03:18:06.500993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.644 [2024-12-14 03:18:06.501015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.644 qpair failed and we were unable to recover it. 
00:36:51.645 [2024-12-14 03:18:06.501109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.501131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.501230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.501253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.501344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.501367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.501480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.501502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.501717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.501739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.501885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.501908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.502066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.502090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.502180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.502203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.502376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.502400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.502569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.502592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 
00:36:51.645 [2024-12-14 03:18:06.502864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.502886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.502993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.503016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.503107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.503129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.503292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.503325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.503564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.503586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.503684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.503707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.503890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.503912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.504073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.504095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.504198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.504221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.504434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.504459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 
00:36:51.645 [2024-12-14 03:18:06.504635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.504657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.504753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.504776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.504870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.504893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.505059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.505082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.505249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.505271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.505379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.505403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.505628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.505650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.505823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.505846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.506063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.506086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.506181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.506202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 
00:36:51.645 [2024-12-14 03:18:06.506365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.506388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.506568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.506590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.506785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.506808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.506980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.507003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.507211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.507234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.507326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.507350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.507470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.507493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.507605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.507628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.507714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.645 [2024-12-14 03:18:06.507735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.645 qpair failed and we were unable to recover it. 00:36:51.645 [2024-12-14 03:18:06.507822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.507843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 
00:36:51.646 [2024-12-14 03:18:06.508045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.508072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.508250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.508272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.508441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.508465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.508548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.508570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.508723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.508745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.508862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.508884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.509053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.509076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.509297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.509326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.509416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.509439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.509605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.509627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 
00:36:51.646 [2024-12-14 03:18:06.509813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.509836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.509998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.510020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.510198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.510221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.510377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.510400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.510567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.510590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.510762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.510785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.510935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.510958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.511130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.511153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.511354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.511377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.511481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.511504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 
00:36:51.646 [2024-12-14 03:18:06.511654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.511677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.511835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.511858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.511938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.511960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.512043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.512067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.512179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.512202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.512372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.512396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.512499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.512521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.512680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.512702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.512877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.512900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.513079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.513101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 
00:36:51.646 [2024-12-14 03:18:06.513262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.513284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.513379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.513402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.513620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.513644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.513809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.513832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.513993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.514016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.514164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.514187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.514339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.514362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.646 [2024-12-14 03:18:06.514471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.646 [2024-12-14 03:18:06.514494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.646 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.514670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.514692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.514844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.514867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 
00:36:51.647 [2024-12-14 03:18:06.515048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.515071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.515226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.515298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.515476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.515514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.515637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.515670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.515862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.515894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.516005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.516037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.516300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.516346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.516445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.516471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.516578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.516601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.516722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.516745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 
00:36:51.647 [2024-12-14 03:18:06.516839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.516863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.516954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.516976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.517170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.517193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.517358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.517382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.517626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.517649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.517827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.517850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.518077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.518099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.518268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.518291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.518450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.518473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.518577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.518601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 
00:36:51.647 [2024-12-14 03:18:06.518864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.518887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.519045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.519077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.519255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.519288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.519470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.519502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.519726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.519758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.520009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.520043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.520243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.520266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.520354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.520378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.520558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.520581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 00:36:51.647 [2024-12-14 03:18:06.520822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.647 [2024-12-14 03:18:06.520845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.647 qpair failed and we were unable to recover it. 
00:36:51.647 [2024-12-14 03:18:06.520959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.647 [2024-12-14 03:18:06.520982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:51.647 qpair failed and we were unable to recover it.
00:36:51.647-00:36:51.653 [2024-12-14 03:18:06.521084 through 03:18:06.557217] The same pair of errors (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420), each ending in "qpair failed and we were unable to recover it.", repeats for every reconnect attempt in this window. Most attempts report tqpair=0x1ca6cd0; three attempts around 03:18:06.543 report tqpair=0x7f3b4c000b90 instead.
00:36:51.653 [2024-12-14 03:18:06.557399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.557422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.557541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.557564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.557648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.557669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.557768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.557791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.557961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.557984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.558176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.558199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.558379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.558403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.558503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.558525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.558627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.558649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.558806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.558829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 
00:36:51.653 [2024-12-14 03:18:06.559021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.559043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.559126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.559148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.559422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.559495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.559729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.559766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.560032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.560065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.560188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.560220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.560484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.560519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.560690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.560723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.560966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.560991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.561102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.561125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 
00:36:51.653 [2024-12-14 03:18:06.561298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.561348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.653 qpair failed and we were unable to recover it. 00:36:51.653 [2024-12-14 03:18:06.561451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.653 [2024-12-14 03:18:06.561473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.561697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.561720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.561820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.561843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.561990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.562013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.562171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.562193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.562354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.562377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.562658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.562681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.562791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.562815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.562975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.562997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 
00:36:51.654 [2024-12-14 03:18:06.563090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.563114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.563207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.563229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.563395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.563418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.563518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.563541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.563633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.563656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.563764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.563787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.563887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.563910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.564006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.564028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.564117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.564142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.564328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.564351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 
00:36:51.654 [2024-12-14 03:18:06.564525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.564547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.564722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.564744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.564835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.564859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.564962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.564983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.565213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.565236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.565352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.565376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.565475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.565498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.565598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.565620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.565775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.565809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.565985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.566008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 
00:36:51.654 [2024-12-14 03:18:06.566209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.566231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.566337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.566358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.566445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.566467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.566640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.566665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.566777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.566800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.654 qpair failed and we were unable to recover it. 00:36:51.654 [2024-12-14 03:18:06.566966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.654 [2024-12-14 03:18:06.566989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.567206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.567228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.567328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.567351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.567504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.567526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.567693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.567721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 
00:36:51.655 [2024-12-14 03:18:06.567897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.567922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.568029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.568055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.568149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.568173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.568290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.568320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.568546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.568569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.568684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.568709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.568805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.568827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.568941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.568964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.569076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.569099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.569196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.569219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 
00:36:51.655 [2024-12-14 03:18:06.569349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.569373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.569466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.569489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.569587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.569610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.569724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.569746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.569843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.569865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.570018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.570041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.570209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.570232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.570410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.570433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.570599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.570622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.570772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.570795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 
00:36:51.655 [2024-12-14 03:18:06.570907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.570930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.571091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.571112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.571280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.571307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.571477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.571499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.571611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.571634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.571723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.571746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.571903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.571930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.572018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.572039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.572194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.572218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.572403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.572426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 
00:36:51.655 [2024-12-14 03:18:06.572575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.572598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.572692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.572715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.572900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.572922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.573030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.573052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.655 [2024-12-14 03:18:06.573148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.655 [2024-12-14 03:18:06.573170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.655 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.573279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.573301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.573426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.573450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.573597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.573620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.573710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.573733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.573841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.573865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 
00:36:51.656 [2024-12-14 03:18:06.574017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.574039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.574119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.574139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.574290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.574320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.574492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.574513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.574594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.574615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.574774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.574797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.574965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.574987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.575157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.575180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.575401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.575425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.575528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.575550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 
00:36:51.656 [2024-12-14 03:18:06.575647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.575670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.575819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.575843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.575946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.575969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.576048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.576070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.576245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.576267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.576365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.576386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.576493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.576515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.576615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.576640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.576754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.576777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.576878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.576900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 
00:36:51.656 [2024-12-14 03:18:06.576998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.577021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.577102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.577123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.577290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.577321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.577423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.577446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.577598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.577621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.577705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.577727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.577880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.577902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.577989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.578013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.578101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.578123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.578273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.578295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 
00:36:51.656 [2024-12-14 03:18:06.578466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.578490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.578671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.656 [2024-12-14 03:18:06.578693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.656 qpair failed and we were unable to recover it. 00:36:51.656 [2024-12-14 03:18:06.578783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-12-14 03:18:06.578805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-12-14 03:18:06.578909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-12-14 03:18:06.578932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-12-14 03:18:06.579035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-12-14 03:18:06.579058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-12-14 03:18:06.579156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-12-14 03:18:06.579179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-12-14 03:18:06.579354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-12-14 03:18:06.579377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-12-14 03:18:06.579477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-12-14 03:18:06.579500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-12-14 03:18:06.579713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-12-14 03:18:06.579736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 00:36:51.657 [2024-12-14 03:18:06.579909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.657 [2024-12-14 03:18:06.579931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.657 qpair failed and we were unable to recover it. 
00:36:51.662 [2024-12-14 03:18:06.616438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.616473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.616663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.616686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.616808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.616847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.617038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.617070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.617262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.617297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.617441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.617473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.617606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.617638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.617830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.617862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.618144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.618177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.618440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.618474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 
00:36:51.662 [2024-12-14 03:18:06.618583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.618605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.618830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.618862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.619174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.619207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.619413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.619437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.619545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.619566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.619738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.619770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.619979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.620013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.620310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.620354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.620552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.620586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.620696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.620729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 
00:36:51.662 [2024-12-14 03:18:06.620872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.620906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.662 qpair failed and we were unable to recover it. 00:36:51.662 [2024-12-14 03:18:06.621046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.662 [2024-12-14 03:18:06.621078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.621210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.621244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.621444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.621478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.621732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.621768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.621905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.621938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.622178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.622212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.622468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.622512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.622676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.622698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.622799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.622832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 
00:36:51.663 [2024-12-14 03:18:06.622942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.622976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.623190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.623228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.623423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.623448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.623616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.623648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.623788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.623823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.624062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.624094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.624290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.624319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.624428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.624468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.624589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.624621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.624792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.624824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 
00:36:51.663 [2024-12-14 03:18:06.625039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.625072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.625261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.625292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.625479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.625513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.625653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.625676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.625784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.625806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.625993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.626016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.626257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.626281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.626392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.626415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.626527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.626551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.626719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.626752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 
00:36:51.663 [2024-12-14 03:18:06.626886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.626921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.627092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.627125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.627301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.627331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.627449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.627471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.627588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.627611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.663 [2024-12-14 03:18:06.627706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.663 [2024-12-14 03:18:06.627727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.663 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.627899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.627922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.628174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.628207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.628384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.628423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.628608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.628641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 
00:36:51.664 [2024-12-14 03:18:06.628768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.628801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.629088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.629121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.629391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.629426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.629642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.629675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.629903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.629935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.630118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.630151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.630342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.630366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.630530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.630553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.630656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.630679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.630805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.630827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 
00:36:51.664 [2024-12-14 03:18:06.630947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.630983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.631159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.631194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.631449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.631484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.631648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.631671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.631775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.631796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.631991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.632015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.632169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.632192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.632350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.632375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.632607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.632631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.632734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.632757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 
00:36:51.664 [2024-12-14 03:18:06.632881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.632904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.633138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.633161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.633285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.633308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.633515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.633539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.633639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.633680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.633786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.633825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.634026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.634058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.634297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.634327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.634436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.634460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.634551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.634572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 
00:36:51.664 [2024-12-14 03:18:06.634686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.634709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.634880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.634904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.635154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.635176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.635336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.635359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.635471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.664 [2024-12-14 03:18:06.635493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.664 qpair failed and we were unable to recover it. 00:36:51.664 [2024-12-14 03:18:06.635714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.635738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.635901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.635924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.636188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.636221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.636397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.636421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.636510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.636531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 
00:36:51.665 [2024-12-14 03:18:06.636627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.636650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.636749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.636771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.636947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.636972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.637130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.637153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.637362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.637397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.637533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.637565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.637685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.637717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.637847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.637880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.638079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.638113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.638235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.638267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 
00:36:51.665 [2024-12-14 03:18:06.638459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.638493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.638687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.638719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.638857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.638890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.639155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.639188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.639357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.639392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.639676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.639710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.639900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.639933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.640175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.640208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.640455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.640489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.640611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.640653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 
00:36:51.665 [2024-12-14 03:18:06.640809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.640832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.641049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.641082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.641366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.641400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.641597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.641629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.641766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.641799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.641931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.641964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.642302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.642391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.642605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.642642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.642852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.642886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.643042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.643076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 
00:36:51.665 [2024-12-14 03:18:06.643343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.643377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.643580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.643613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.643744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.643777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.643905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.665 [2024-12-14 03:18:06.643937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.665 qpair failed and we were unable to recover it. 00:36:51.665 [2024-12-14 03:18:06.644134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.644167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.644360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.644393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.644538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.644571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.644706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.644738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.644928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.644960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.645253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.645287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 
00:36:51.666 [2024-12-14 03:18:06.645519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.645553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.645697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.645730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.645917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.645950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.646257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.646289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.646425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.646452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.646571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.646596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.646706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.646729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.646827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.646850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.647162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.647193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.647299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.647336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 
00:36:51.666 [2024-12-14 03:18:06.647524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.647548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.647664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.647689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.647784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.647807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.647983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.648008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.648108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.648129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.648356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.648379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.648506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.648529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.648636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.648658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.648833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.648857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.649061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.649093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 
00:36:51.666 [2024-12-14 03:18:06.649305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.649369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.649572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.649604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.649844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.649878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.650080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.650113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.650301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.650332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.650491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.650524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.650717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.650749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.650950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.650983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.651280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.651323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.651510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.651544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 
00:36:51.666 [2024-12-14 03:18:06.651722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.651744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.651997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.666 [2024-12-14 03:18:06.652019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.666 qpair failed and we were unable to recover it. 00:36:51.666 [2024-12-14 03:18:06.652260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.652283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.652417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.652440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.652558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.652582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.652754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.652777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.652951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.652984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.653213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.653246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.653389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.653422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.653531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.653564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 
00:36:51.667 [2024-12-14 03:18:06.653698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.653736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.653947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.653979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.654241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.654274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.654430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.654465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.654592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.654625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.654857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.654889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.655066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.655099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.655293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.655336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.655473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.655507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.655703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.655729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 
00:36:51.667 [2024-12-14 03:18:06.655938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.655981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.656189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.656221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.656480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.656514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.656633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.656656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.656787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.656811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.657105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.657128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.657377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.657402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.657520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.657544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.657717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.657750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.658027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.658061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 
00:36:51.667 [2024-12-14 03:18:06.658171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.658202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.658444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.658478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.658657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.658681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.658858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.658890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.659181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.659215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.667 [2024-12-14 03:18:06.659411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.667 [2024-12-14 03:18:06.659435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.667 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.659663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.659696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.659817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.659854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.660075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.660108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.660417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.660451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 
00:36:51.668 [2024-12-14 03:18:06.660657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.660692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.660884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.660917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.661135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.661168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.661362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.661397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.661588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.661617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.661820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.661852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.662117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.662154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.662420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.662455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.662609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.662632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.662741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.662762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 
00:36:51.668 [2024-12-14 03:18:06.662879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.662905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.663191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.663224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.663401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.663436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.663633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.663657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.663848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.663871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.663960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.663982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.664231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.664264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.664414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.664449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.664734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.664767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.664971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.665005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 
00:36:51.668 [2024-12-14 03:18:06.665253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.665286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.665508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.665546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.665689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.665723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.665980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.666013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.666222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.666257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.666399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.666433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.666688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.666721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.666869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.666903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.667209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.667242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.667438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.667473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 
00:36:51.668 [2024-12-14 03:18:06.667752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.667778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.667882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.667903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.668062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.668085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.668329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.668 [2024-12-14 03:18:06.668354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.668 qpair failed and we were unable to recover it. 00:36:51.668 [2024-12-14 03:18:06.668528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.668551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.668701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.668724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.668896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.668920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.669030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.669054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.669306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.669342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.669521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.669545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 
00:36:51.669 [2024-12-14 03:18:06.669719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.669743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.669896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.669918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.670161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.670184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.670413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.670438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.670660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.670684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.670864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.670886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.671008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.671031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.671145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.671170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.671450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.671474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.671722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.671746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 
00:36:51.669 [2024-12-14 03:18:06.671860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.671884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.672128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.672152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.672256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.672277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.672429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.672453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.672618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.672641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.672842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.672866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.673123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.673147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.673322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.673346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.673534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.673558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.673731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.673754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 
00:36:51.669 [2024-12-14 03:18:06.673929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.673952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.674143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.674166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.674328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.674353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.674541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.674564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.674737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.674760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.674944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.674967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.675190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.675213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.675392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.675418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.675533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.675554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.675754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.675778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 
00:36:51.669 [2024-12-14 03:18:06.676039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.676062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.676181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.676204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.676385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.669 [2024-12-14 03:18:06.676408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.669 qpair failed and we were unable to recover it. 00:36:51.669 [2024-12-14 03:18:06.676523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.676547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.676723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.676747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.676871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.676894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.677140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.677174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.677361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.677386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.677554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.677593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.677771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.677804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 
00:36:51.670 [2024-12-14 03:18:06.678068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.678101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.678285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.678329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.678600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.678633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.678814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.678840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.679017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.679040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.679199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.679223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.679386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.679411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.679567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.679590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.679694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.679717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.679956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.679990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 
00:36:51.670 [2024-12-14 03:18:06.680256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.680290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.680499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.680534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.680691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.680725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.680981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.681015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.681219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.681252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.681464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.681498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.681704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.681727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.681835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.681865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.682114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.682148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 00:36:51.670 [2024-12-14 03:18:06.682271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.670 [2024-12-14 03:18:06.682304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.670 qpair failed and we were unable to recover it. 
00:36:51.670 [... 2024-12-14 03:18:06.682 through 03:18:06.720: the same error pair repeats — connect() failed, errno = 111 (posix.c:1054:posix_sock_create) and sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 (nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock) — each followed by "qpair failed and we were unable to recover it." ...]
00:36:51.674 [... 2024-12-14 03:18:06.720 through 03:18:06.729: the same connect() failed, errno = 111 and sock connection errors continue for tqpair=0x1ca6cd0 and, from 03:18:06.722 onward, also for tqpair=0x7f3b50000b90 (interleaved), all with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:51.675 [2024-12-14 03:18:06.729816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.675 [2024-12-14 03:18:06.729849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.675 qpair failed and we were unable to recover it. 00:36:51.675 [2024-12-14 03:18:06.730085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.675 [2024-12-14 03:18:06.730119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.675 qpair failed and we were unable to recover it. 00:36:51.675 [2024-12-14 03:18:06.730322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.675 [2024-12-14 03:18:06.730357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.676 qpair failed and we were unable to recover it. 00:36:51.676 [2024-12-14 03:18:06.730557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.676 [2024-12-14 03:18:06.730591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.676 qpair failed and we were unable to recover it. 00:36:51.965 [2024-12-14 03:18:06.730868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-12-14 03:18:06.730896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-12-14 03:18:06.731012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-12-14 03:18:06.731034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-12-14 03:18:06.731156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-12-14 03:18:06.731181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-12-14 03:18:06.731356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-12-14 03:18:06.731381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-12-14 03:18:06.731497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-12-14 03:18:06.731526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.965 qpair failed and we were unable to recover it. 00:36:51.965 [2024-12-14 03:18:06.731707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.965 [2024-12-14 03:18:06.731731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 
00:36:51.966 [2024-12-14 03:18:06.731889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.731914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.732077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.732102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.732196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.732218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.732400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.732425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.732590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.732616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.732789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.732813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.733063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.733089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.733250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.733274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.733466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.733491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.733673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.733697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 
00:36:51.966 [2024-12-14 03:18:06.733888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.733912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.734082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.734107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.734360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.734398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.734609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.734646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.734828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.734863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.735051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.735081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.735339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.735365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.735527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.735552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.735804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.735828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.736022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.736047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 
00:36:51.966 [2024-12-14 03:18:06.736162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.736186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.736358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.736383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.736567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.736591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.736708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.736733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.736901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.736925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.737027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.737049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.737242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.737267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.737361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.737385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.737567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.737592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.737705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.737730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 
00:36:51.966 [2024-12-14 03:18:06.737950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.737975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.738213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.738239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.966 qpair failed and we were unable to recover it. 00:36:51.966 [2024-12-14 03:18:06.738425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.966 [2024-12-14 03:18:06.738450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.738708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.738732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.738906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.738931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.739110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.739134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.739225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.739247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.739431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.739458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.739655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.739689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.739895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.739934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 
00:36:51.967 [2024-12-14 03:18:06.740076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.740112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.740324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.740360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.740617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.740653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.740913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.740948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.741144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.741178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.741460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.741495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.741691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.741726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.741861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.741896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.742180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.742215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.742423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.742458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 
00:36:51.967 [2024-12-14 03:18:06.742662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.742697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.742914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.742948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.743066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.743111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.743321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.743356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.743628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.743662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.743791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.743824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.744005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.744037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.744286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.744334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.744545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.744578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.744771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.744804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 
00:36:51.967 [2024-12-14 03:18:06.744997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.745031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.745246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.745280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.745528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.745563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.745677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.745707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.745919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.745952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.746094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.746127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.746333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.746368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.746502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.746534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.746750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.746785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 00:36:51.967 [2024-12-14 03:18:06.747060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.967 [2024-12-14 03:18:06.747093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.967 qpair failed and we were unable to recover it. 
00:36:51.967 [2024-12-14 03:18:06.747302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.747348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.747540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.747575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.747702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.747731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.747910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.747944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.748080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.748114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.748249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.748282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.748524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.748558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.748855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.748888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.749153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.749187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.749385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.749426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 
00:36:51.968 [2024-12-14 03:18:06.749622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.749646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.749874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.749898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.750076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.750100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.750295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.750330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.750557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.750581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.750854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.750878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.751145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.751178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.751483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.751519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.751776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.751809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.752030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.752064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 
00:36:51.968 [2024-12-14 03:18:06.752246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.752280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.752564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.752599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.752859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.752893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.753125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.753158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.753409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.753443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.753699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.753723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.753967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.753991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.754148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.754171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.754363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.754388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.754509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.754533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 
00:36:51.968 [2024-12-14 03:18:06.754698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.754722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.754978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.755012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.755206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.755241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.755489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.755525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.755773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.755806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.756067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.756091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.756253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.756292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.756502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.756535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.968 [2024-12-14 03:18:06.756785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.968 [2024-12-14 03:18:06.756818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.968 qpair failed and we were unable to recover it. 00:36:51.969 [2024-12-14 03:18:06.757044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.969 [2024-12-14 03:18:06.757077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.969 qpair failed and we were unable to recover it. 
00:36:51.969 [2024-12-14 03:18:06.757356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.969 [2024-12-14 03:18:06.757393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.969 qpair failed and we were unable to recover it. 00:36:51.969 [2024-12-14 03:18:06.757578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.969 [2024-12-14 03:18:06.757611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.969 qpair failed and we were unable to recover it. 00:36:51.969 [2024-12-14 03:18:06.757760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.969 [2024-12-14 03:18:06.757784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.969 qpair failed and we were unable to recover it. 00:36:51.969 [2024-12-14 03:18:06.758033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.969 [2024-12-14 03:18:06.758056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.969 qpair failed and we were unable to recover it. 00:36:51.969 [2024-12-14 03:18:06.758281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.969 [2024-12-14 03:18:06.758306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.969 qpair failed and we were unable to recover it. 00:36:51.969 [2024-12-14 03:18:06.758561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.969 [2024-12-14 03:18:06.758586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.969 qpair failed and we were unable to recover it. 00:36:51.969 [2024-12-14 03:18:06.758825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.969 [2024-12-14 03:18:06.758849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.969 qpair failed and we were unable to recover it. 00:36:51.969 [2024-12-14 03:18:06.759093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.969 [2024-12-14 03:18:06.759117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.969 qpair failed and we were unable to recover it. 00:36:51.969 [2024-12-14 03:18:06.759359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.969 [2024-12-14 03:18:06.759384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.969 qpair failed and we were unable to recover it. 00:36:51.969 [2024-12-14 03:18:06.759579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.969 [2024-12-14 03:18:06.759602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.969 qpair failed and we were unable to recover it. 
00:36:51.969 [2024-12-14 03:18:06.759834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.969 [2024-12-14 03:18:06.759868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:51.969 qpair failed and we were unable to recover it.
00:36:51.969 [... the same three-line failure pattern repeats continuously from 03:18:06.759 to 03:18:06.809: connect() to addr=10.0.0.2, port=4420 returns errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x1ca6cd0, and each time the qpair fails and cannot be recovered ...]
00:36:51.974 [2024-12-14 03:18:06.809367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.974 [2024-12-14 03:18:06.809392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:51.974 qpair failed and we were unable to recover it.
00:36:51.974 [2024-12-14 03:18:06.809500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-12-14 03:18:06.809523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-12-14 03:18:06.809629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-12-14 03:18:06.809653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-12-14 03:18:06.809794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-12-14 03:18:06.809819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-12-14 03:18:06.810007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-12-14 03:18:06.810031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-12-14 03:18:06.810196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.974 [2024-12-14 03:18:06.810221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.974 qpair failed and we were unable to recover it. 00:36:51.974 [2024-12-14 03:18:06.810341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.810366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.810549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.810575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.810750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.810775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.811009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.811034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.811159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.811181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 
00:36:51.975 [2024-12-14 03:18:06.811370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.811395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.811582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.811607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.811794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.811818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.812047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.812072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.812177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.812199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.812360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.812386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.812572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.812597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.812697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.812719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.812999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.813024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.813205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.813228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 
00:36:51.975 [2024-12-14 03:18:06.813388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.813413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.813603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.813628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.813764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.813788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.814033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.814057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.814225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.814249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.814453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.814479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.814660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.814685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.814879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.814905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.815234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.815259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.815404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.815428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 
00:36:51.975 [2024-12-14 03:18:06.815587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.815611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.815745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.815771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.815989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.816014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.816278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.816301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.816418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.816443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.816635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.816659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.816830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.816854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.817041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.817065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.817335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.817360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.975 qpair failed and we were unable to recover it. 00:36:51.975 [2024-12-14 03:18:06.817488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.975 [2024-12-14 03:18:06.817515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 
00:36:51.976 [2024-12-14 03:18:06.817635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.817659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.817799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.817823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.818028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.818054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.818233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.818257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.818358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.818381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.818630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.818655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.818765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.818790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.819091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.819115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.819380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.819407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.819521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.819546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 
00:36:51.976 [2024-12-14 03:18:06.819679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.819704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.819889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.819913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.820095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.820120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.820255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.820280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.820403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.820428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.820544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.820569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.820679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.820703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.820878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.820902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.821065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.821089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.821201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.821226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 
00:36:51.976 [2024-12-14 03:18:06.821339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.821364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.821480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.821509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.821673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.821697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.821872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.821896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.822075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.822101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.822210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.822235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.822351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.822377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.822474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.822496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.822593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.822616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.822724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.822750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 
00:36:51.976 [2024-12-14 03:18:06.822841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.822862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.822960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.822982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.823096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.823120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.823228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.823253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.823364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.823390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.823503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.823527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.823636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.823661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.823784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.823808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.976 [2024-12-14 03:18:06.823895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.976 [2024-12-14 03:18:06.823917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.976 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.824038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.824064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 
00:36:51.977 [2024-12-14 03:18:06.824167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.824192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.824293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.824327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.824435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.824460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.824627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.824651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.824821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.824846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.824935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.824958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.825065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.825091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.825220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.825245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.825374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.825404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.825569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.825594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 
00:36:51.977 [2024-12-14 03:18:06.825755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.825780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.825890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.825919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.826023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.826048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.826175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.826199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.826292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.826324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.826432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.826456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.826552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.826574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.826675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.826699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.826790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.826814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.826918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.826940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 
00:36:51.977 [2024-12-14 03:18:06.827060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.827084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.827174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.827196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.827311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.827364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.827450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.827472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.827573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.827598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.827702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.827728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.827822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.827845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.827960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.827985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.828074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.828098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.828213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.828238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 
00:36:51.977 [2024-12-14 03:18:06.828337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.828361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.828558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.828583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.828688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.828710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.828882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.828907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.829085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.829111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.829211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.829240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.829346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.829374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.829492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.977 [2024-12-14 03:18:06.829516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.977 qpair failed and we were unable to recover it. 00:36:51.977 [2024-12-14 03:18:06.829617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.829642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.829763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.829786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 
00:36:51.978 [2024-12-14 03:18:06.829947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.829971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.830226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.830250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.830427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.830452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.830615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.830641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.830742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.830766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.830872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.830899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.830988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.831012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.831126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.831151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.831268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.831293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.831473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.831498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 
00:36:51.978 [2024-12-14 03:18:06.831666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.831693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.831878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.831904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.832004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.832028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.832194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.832220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.832331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.832354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.832533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.832558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.832724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.832750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.832841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.832866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.832978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.833003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.833097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.833121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 
00:36:51.978 [2024-12-14 03:18:06.833362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.833388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.833503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.833528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.833632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.833656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.833755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.833779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.834020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.834046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.834150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.834175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.834296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.834331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.834434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.834457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.834547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.834572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.834678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.834702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 
00:36:51.978 [2024-12-14 03:18:06.834791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.834815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.835024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.835049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.835150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.835173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.835349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.835375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.835481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.835507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.835606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.835630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.835797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.978 [2024-12-14 03:18:06.835822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.978 qpair failed and we were unable to recover it. 00:36:51.978 [2024-12-14 03:18:06.835917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.835941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.836043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.836067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.836240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.836263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 
00:36:51.979 [2024-12-14 03:18:06.836449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.836474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.836643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.836667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.836858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.836883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.836991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.837014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.837107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.837129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.837233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.837256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.837420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.837445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.837605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.837630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.837811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.837835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.837945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.837967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 
00:36:51.979 [2024-12-14 03:18:06.838137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.838162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.838260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.838284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.838397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.838420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.838511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.838533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.838657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.838681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.838798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.838821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.839001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.839025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.839229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.839254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.839352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.839379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.839485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.839509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 
00:36:51.979 [2024-12-14 03:18:06.839607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.839633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.839724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.839747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.839845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.839870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.839961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.839989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.840078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.840102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.840197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.840227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.840389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.840416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.840504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.840528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.840620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.840643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.840744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.840767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 
00:36:51.979 [2024-12-14 03:18:06.840934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.840959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.979 [2024-12-14 03:18:06.841072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.979 [2024-12-14 03:18:06.841095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.979 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.841191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.841215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.841401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.841426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.841596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.841619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.841718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.841743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.841922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.841947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.842067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.842091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.842246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.842271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.842371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.842395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 
00:36:51.980 [2024-12-14 03:18:06.842487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.842510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.842598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.842622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.842705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.842731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.842901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.842925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.843033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.843058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.843150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.843172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.843270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.843295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.843427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.843452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.843566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.843590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.843684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.843708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 
00:36:51.980 [2024-12-14 03:18:06.843866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.843894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.844070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.844094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.844252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.844278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.844466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.844490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.844583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.844606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.844692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.844715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.844824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.844847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.845010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.845034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.845122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.845145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.845232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.845256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 
00:36:51.980 [2024-12-14 03:18:06.845360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.845385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.845469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.845493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.845662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.845686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.845788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.845812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.845985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.846008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.846173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.846198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.846383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.846407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.846503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.846526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.846634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.846658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.846749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.846772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 
00:36:51.980 [2024-12-14 03:18:06.846867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.980 [2024-12-14 03:18:06.846893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.980 qpair failed and we were unable to recover it. 00:36:51.980 [2024-12-14 03:18:06.846993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.847016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.847134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.847159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.847333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.847357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.847451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.847474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.847570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.847594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.847757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.847780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.847876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.847901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.847997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.848021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.848133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.848157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 
00:36:51.981 [2024-12-14 03:18:06.848248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.848270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.848372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.848395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.848608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.848631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.848721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.848744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.848834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.848858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.849046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.849070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.849194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.849218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.849452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.849477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.849635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.849658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.849774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.849797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 
00:36:51.981 [2024-12-14 03:18:06.849890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.849914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.850144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.850222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.850387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.850427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.850561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.850597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.850807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.850841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.851023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.851056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.851169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.851204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.851475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.851503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.851598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.851623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.851710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.851733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 
00:36:51.981 [2024-12-14 03:18:06.851896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.851920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.852090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.852114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.852288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.852321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.852420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.852444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.852603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.852627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.981 [2024-12-14 03:18:06.852726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.981 [2024-12-14 03:18:06.852750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.981 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.852931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.852956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.853077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.853099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.853189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.853212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.853371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.853395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 
00:36:51.982 [2024-12-14 03:18:06.853485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.853507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.853668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.853692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.853815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.853839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.853923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.853947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.854108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.854132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.854294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.854324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.854553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.854578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.854678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.854702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.854902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.854941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.855079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.855111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 
00:36:51.982 [2024-12-14 03:18:06.855308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.855353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.855465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.855499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.855629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.855662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.855772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.855806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.855980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.856006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.856110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.856133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.856224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.856246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.856356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.856379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.856469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.856493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.856582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.856604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 
00:36:51.982 [2024-12-14 03:18:06.856834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.856859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.857032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.857057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.857141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.857165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.857256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.857279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.857373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.857396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.982 [2024-12-14 03:18:06.857501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.982 [2024-12-14 03:18:06.857525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.982 qpair failed and we were unable to recover it. 00:36:51.983 [2024-12-14 03:18:06.857633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-12-14 03:18:06.857655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-12-14 03:18:06.857769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-12-14 03:18:06.857793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-12-14 03:18:06.857957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-12-14 03:18:06.857979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-12-14 03:18:06.858084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-12-14 03:18:06.858108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 
00:36:51.983 [2024-12-14 03:18:06.858331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-12-14 03:18:06.858357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-12-14 03:18:06.858455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-12-14 03:18:06.858478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-12-14 03:18:06.858596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-12-14 03:18:06.858620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-12-14 03:18:06.858873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-12-14 03:18:06.858897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-12-14 03:18:06.858987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-12-14 03:18:06.859011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-12-14 03:18:06.859133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-12-14 03:18:06.859170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-12-14 03:18:06.859287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-12-14 03:18:06.859334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-12-14 03:18:06.859453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-12-14 03:18:06.859487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-12-14 03:18:06.859605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-12-14 03:18:06.859637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 00:36:51.983 [2024-12-14 03:18:06.859746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.983 [2024-12-14 03:18:06.859780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:51.983 qpair failed and we were unable to recover it. 
00:36:51.988 [2024-12-14 03:18:06.884402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.988 [2024-12-14 03:18:06.884475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420
00:36:51.988 qpair failed and we were unable to recover it.
00:36:51.988 [2024-12-14 03:18:06.884730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.988 [2024-12-14 03:18:06.884803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420
00:36:51.988 qpair failed and we were unable to recover it.
(Identical connect() failures, errno = 111, continue for tqpair=0x1ca6cd0, tqpair=0x7f3b4c000b90 and tqpair=0x7f3b58000b90 through 03:18:06.896, each attempt again ending with "qpair failed and we were unable to recover it.")
00:36:51.990 [2024-12-14 03:18:06.896353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.896378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-12-14 03:18:06.896562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.896586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-12-14 03:18:06.896802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.896826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-12-14 03:18:06.897020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.897043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-12-14 03:18:06.897284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.897308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-12-14 03:18:06.897510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.897534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-12-14 03:18:06.897644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.897665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-12-14 03:18:06.897909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.897932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-12-14 03:18:06.898181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.898205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-12-14 03:18:06.898449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.898474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 
00:36:51.990 [2024-12-14 03:18:06.898715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.898739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-12-14 03:18:06.898962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.898985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-12-14 03:18:06.899105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.899128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-12-14 03:18:06.899364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.899388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-12-14 03:18:06.899560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.990 [2024-12-14 03:18:06.899584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.990 qpair failed and we were unable to recover it. 00:36:51.990 [2024-12-14 03:18:06.899830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.899853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.900076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.900098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.900268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.900292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.900584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.900607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.900829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.900852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 
00:36:51.991 [2024-12-14 03:18:06.901022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.901045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.901209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.901232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.901459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.901483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.901662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.901685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.901870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.901892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.902083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.902111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.902292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.902324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.902558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.902582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.902738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.902761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.903005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.903027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 
00:36:51.991 [2024-12-14 03:18:06.903201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.903242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.903492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.903517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.903682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.903704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.903949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.903973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.904145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.904169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.904414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.904438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.904602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.904625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.904798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.904821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.904938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.904962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.905191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.905215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 
00:36:51.991 [2024-12-14 03:18:06.905369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.905393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.905576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.905600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.905778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.905802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.991 qpair failed and we were unable to recover it. 00:36:51.991 [2024-12-14 03:18:06.906094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.991 [2024-12-14 03:18:06.906118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.906338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.906362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.906584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.906608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.906787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.906810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.906986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.907009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.907231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.907254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.907387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.907411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 
00:36:51.992 [2024-12-14 03:18:06.907677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.907700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.907876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.907901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.908147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.908174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.908354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.908379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.908559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.908581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.908835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.908858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.909052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.909075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.909343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.909367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.909535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.909558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.909746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.909770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 
00:36:51.992 [2024-12-14 03:18:06.909897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.909918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.910097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.910120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.910293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.910324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.910571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.910594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.910693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.910714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.910824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.910846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.910966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.910990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.911221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.911245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.911465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.911490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 00:36:51.992 [2024-12-14 03:18:06.911655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.911678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.992 qpair failed and we were unable to recover it. 
00:36:51.992 [2024-12-14 03:18:06.911917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.992 [2024-12-14 03:18:06.911941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.912186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.912209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.912383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.912407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.912518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.912543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.912786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.912811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.913067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.913091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.913178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.913201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.913370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.913394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.913681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.913705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.913994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.914021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 
00:36:51.993 [2024-12-14 03:18:06.914264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.914287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.914398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.914420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.914593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.914616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.914844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.914867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.915065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.915089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.915270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.915294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.915476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.915499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.915685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.915709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.915899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.915924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.916042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.916064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 
00:36:51.993 [2024-12-14 03:18:06.916357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.916381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.916636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.916660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.916763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.916786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.916981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.917005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.917215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.917239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.917501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.917526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.917722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.917746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.917972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.917995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.918220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.918243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.918436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.918461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 
00:36:51.993 [2024-12-14 03:18:06.918654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.918677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.918878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.918903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.919018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.919044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.919157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.919180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.919359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.919384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.919551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.919575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.919689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.993 [2024-12-14 03:18:06.919710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.993 qpair failed and we were unable to recover it. 00:36:51.993 [2024-12-14 03:18:06.919881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.919905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.920084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.920107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.920282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.920306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 
00:36:51.994 [2024-12-14 03:18:06.920557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.920582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.920685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.920709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.920825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.920848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.920963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.920987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.921212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.921236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.921356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.921380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.921564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.921588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.921700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.921724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.921997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.922020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.922288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.922333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 
00:36:51.994 [2024-12-14 03:18:06.922556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.922591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.922808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.922842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.922976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.923001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.923171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.923195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.923330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.923355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.923483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.923506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.923687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.923711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.923970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.923993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.924157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.924191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 00:36:51.994 [2024-12-14 03:18:06.924348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.994 [2024-12-14 03:18:06.924383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:51.994 qpair failed and we were unable to recover it. 
00:36:51.994 [2024-12-14 03:18:06.924583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.994 [2024-12-14 03:18:06.924618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:51.994 qpair failed and we were unable to recover it.
[... the same three-line error group repeats roughly 200 more times between 03:18:06.924 and 03:18:06.971 (elapsed 00:36:51.994 to 00:36:52.000), with tqpair=0x1ca6cd0, tqpair=0x7f3b50000b90, and tqpair=0x7f3b58000b90, always addr=10.0.0.2, port=4420, errno = 111 ...]
00:36:52.000 [2024-12-14 03:18:06.971098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.971131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.971427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.971462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.971649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.971681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.971871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.971904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.972040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.972073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.972275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.972307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.972510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.972547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.972793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.972825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.973090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.973123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.973401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.973436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 
00:36:52.000 [2024-12-14 03:18:06.973580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.973614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.973799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.973833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.974101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.974133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.974311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.974354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.974628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.974661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.974870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.974903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.975149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.975182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.000 [2024-12-14 03:18:06.975372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.000 [2024-12-14 03:18:06.975405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.000 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.975539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.975573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.975760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.975793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 
00:36:52.001 [2024-12-14 03:18:06.975994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.976027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.976213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.976246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.976501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.976558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.976753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.976827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.977096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.977133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.977392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.977429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.977638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.977671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.977848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.977881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.978068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.978102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.978288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.978338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 
00:36:52.001 [2024-12-14 03:18:06.978538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.978573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.978694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.978728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.978911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.978945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.979222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.979256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.979405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.979450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.979676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.979701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.979787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.979809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.980028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.980074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.980252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.980284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.980475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.980551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 
00:36:52.001 [2024-12-14 03:18:06.980782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.980818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.981019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.981052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.981252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.981285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.981415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.981449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.981711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.981744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.981948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.981982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.982179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.982211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.982488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.982521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.982654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.982687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.982869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.982901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 
00:36:52.001 [2024-12-14 03:18:06.983098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.983132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.983414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.983448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.983670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.983704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.984000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.984032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.984227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.984261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.984457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.001 [2024-12-14 03:18:06.984493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.001 qpair failed and we were unable to recover it. 00:36:52.001 [2024-12-14 03:18:06.984736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.984769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.984950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.984982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.985272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.985304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.985504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.985537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 
00:36:52.002 [2024-12-14 03:18:06.985728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.985762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.986067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.986099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.986307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.986354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.986550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.986589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.986855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.986888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.987092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.987126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.987376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.987410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.987595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.987628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.987836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.987869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.988139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.988172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 
00:36:52.002 [2024-12-14 03:18:06.988367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.988401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.988683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.988715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.988906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.988939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.989206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.989240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.989437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.989471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.989650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.989683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.989900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.989934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.990185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.990219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.990348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.990382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.990625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.990658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 
00:36:52.002 [2024-12-14 03:18:06.990874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.990908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.991169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.991201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.991464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.991499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.991693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.991727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.991990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.992022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.992286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.992327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.992474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.992507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.992727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.992760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.992979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.993013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.993285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.993326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 
00:36:52.002 [2024-12-14 03:18:06.993614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.993647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.993907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.993940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.994076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.994110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.994407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.994484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.994775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.002 [2024-12-14 03:18:06.994815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.002 qpair failed and we were unable to recover it. 00:36:52.002 [2024-12-14 03:18:06.995091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.995127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.995400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.995436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.995640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.995674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.995871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.995906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.996040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.996073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 
00:36:52.003 [2024-12-14 03:18:06.996356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.996391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.996588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.996622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.996831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.996864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.997000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.997044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.997296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.997334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.997507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.997531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.997778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.997802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.998048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.998072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.998193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.998218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.998393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.998418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 
00:36:52.003 [2024-12-14 03:18:06.998590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.998625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.998811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.998845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.999035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.999068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.999319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.999344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.999524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.999558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:06.999803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:06.999837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.000033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.000067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.000294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.000353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.000585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.000619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.000910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.000944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 
00:36:52.003 [2024-12-14 03:18:07.001072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.001105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.001372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.001407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.001608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.001640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.001816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.001844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.002109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.002143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.002354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.002390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.002531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.002555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.002737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.002771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.003007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.003041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.003326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.003361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 
00:36:52.003 [2024-12-14 03:18:07.003638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.003683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.003882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.003917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.004108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.004141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.004282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.003 [2024-12-14 03:18:07.004325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.003 qpair failed and we were unable to recover it. 00:36:52.003 [2024-12-14 03:18:07.004527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.004561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.004745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.004779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.005029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.005062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.005353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.005388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.005601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.005635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.005881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.005916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 
00:36:52.004 [2024-12-14 03:18:07.006210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.006234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.006395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.006420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.006662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.006698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.006892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.006926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.007214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.007252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.007384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.007418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.007552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.007587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.007859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.007891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.008006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.008041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.008308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.008353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 
00:36:52.004 [2024-12-14 03:18:07.008537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.008571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.008841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.008874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.009004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.009037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.009242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.009270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.009467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.009502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.009705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.009739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.010007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.010040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.010332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.010375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.010633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.010668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.010920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.010954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 
00:36:52.004 [2024-12-14 03:18:07.011171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.011206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.011399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.011434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.011619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.011654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.011923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.011957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.012223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.012256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.012551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.012587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.012856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.012891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.013110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.013133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.013379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.013413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.013591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.013626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 
00:36:52.004 [2024-12-14 03:18:07.013897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.013931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.014166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.014190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.004 [2024-12-14 03:18:07.014422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.004 [2024-12-14 03:18:07.014447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.004 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.014622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.014656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.014834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.014869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.015078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.015113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.015296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.015342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.015542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.015576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.015848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.015881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.016071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.016099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 
00:36:52.005 [2024-12-14 03:18:07.016271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.016331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.016537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.016571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.016766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.016800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.017074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.017109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.017288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.017328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.017541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.017566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.017687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.017721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.018010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.018045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.018327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.018362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.018577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.018612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 
00:36:52.005 [2024-12-14 03:18:07.018863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.018898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.019200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.019234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.019514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.019550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.019752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.019787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.019969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.020004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.020280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.020304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.020472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.020496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.020660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.020693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.020944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.021023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.021266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.021304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 
00:36:52.005 [2024-12-14 03:18:07.021587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.021624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.021892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.021925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.022106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.022140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.022281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.022326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.022607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.022640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.022905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.022937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.023052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.005 [2024-12-14 03:18:07.023085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.005 qpair failed and we were unable to recover it. 00:36:52.005 [2024-12-14 03:18:07.023375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.023410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.023633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.023667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.023846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.023879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 
00:36:52.006 [2024-12-14 03:18:07.024009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.024059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.024249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.024283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.024518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.024554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.024752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.024787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.024969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.025003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.025141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.025166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.025274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.025298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.025544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.025569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.025823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.025848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.025974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.025998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 
00:36:52.006 [2024-12-14 03:18:07.026176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.026200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.026370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.026396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.026653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.026689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.026967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.027001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.027335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.027361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.027485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.027521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.027774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.027808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.027989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.028022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.028269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.028298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.028531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.028555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 
00:36:52.006 [2024-12-14 03:18:07.028785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.028810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.028982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.029006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.029247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.029270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.029530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.029556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.029736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.029759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.029948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.029969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.030089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.030111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.030215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.030237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.030398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.030421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.030609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.030632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 
00:36:52.006 [2024-12-14 03:18:07.030892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.030914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.031018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.031040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.031321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.031346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.031601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.031625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.031795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.031819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.032054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.032078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.006 qpair failed and we were unable to recover it. 00:36:52.006 [2024-12-14 03:18:07.032332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.006 [2024-12-14 03:18:07.032358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.032593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.032617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.032790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.032814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.033040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.033065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 
00:36:52.007 [2024-12-14 03:18:07.033176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.033198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.033411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.033435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.033673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.033696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.033873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.033895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.034145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.034170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.034417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.034441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.034606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.034631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.034864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.034888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.035163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.035187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.035401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.035426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 
00:36:52.007 [2024-12-14 03:18:07.035609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.035634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.035815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.035839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.035926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.035948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.036109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.036133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.036321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.036347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.036576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.036600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.036862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.036887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.037150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.037174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.037406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.037432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.037604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.037628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 
00:36:52.007 [2024-12-14 03:18:07.037820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.037843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.038073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.038096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.038268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.038292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.038472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.038497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.038661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.038686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.038870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.038894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.039087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.039110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.039360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.039384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.039603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.039626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.039860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.039889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 
00:36:52.007 [2024-12-14 03:18:07.040063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.040086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.040325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.040350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.040519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.040543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.040731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.040755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.040918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.040941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.041193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.041218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.041394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.041418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.007 [2024-12-14 03:18:07.041662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.007 [2024-12-14 03:18:07.041685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.007 qpair failed and we were unable to recover it. 00:36:52.008 [2024-12-14 03:18:07.041930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.008 [2024-12-14 03:18:07.041954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.008 qpair failed and we were unable to recover it. 00:36:52.008 [2024-12-14 03:18:07.042214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.008 [2024-12-14 03:18:07.042240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.008 qpair failed and we were unable to recover it. 
00:36:52.008 [2024-12-14 03:18:07.042523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.008 [2024-12-14 03:18:07.042547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.008 qpair failed and we were unable to recover it. 00:36:52.008 [2024-12-14 03:18:07.042787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.008 [2024-12-14 03:18:07.042811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.008 qpair failed and we were unable to recover it. 00:36:52.008 [2024-12-14 03:18:07.042921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.008 [2024-12-14 03:18:07.042943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.008 qpair failed and we were unable to recover it. 00:36:52.008 [2024-12-14 03:18:07.043056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.008 [2024-12-14 03:18:07.043080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.008 qpair failed and we were unable to recover it. 00:36:52.008 [2024-12-14 03:18:07.043262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.008 [2024-12-14 03:18:07.043287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.008 qpair failed and we were unable to recover it. 00:36:52.008 [2024-12-14 03:18:07.043532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.008 [2024-12-14 03:18:07.043557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.008 qpair failed and we were unable to recover it. 00:36:52.008 [2024-12-14 03:18:07.043805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.008 [2024-12-14 03:18:07.043829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.008 qpair failed and we were unable to recover it. 00:36:52.008 [2024-12-14 03:18:07.044073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.008 [2024-12-14 03:18:07.044098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.008 qpair failed and we were unable to recover it. 00:36:52.008 [2024-12-14 03:18:07.044284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.008 [2024-12-14 03:18:07.044308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.008 qpair failed and we were unable to recover it. 00:36:52.008 [2024-12-14 03:18:07.044498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.008 [2024-12-14 03:18:07.044523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.008 qpair failed and we were unable to recover it. 
00:36:52.008 [2024-12-14 03:18:07.044622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:52.008 [2024-12-14 03:18:07.044643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 
00:36:52.008 qpair failed and we were unable to recover it. 
00:36:52.295 [... the same two-line error pair (posix.c:1054:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420) repeats continuously from 03:18:07.044622 through 03:18:07.092381; every attempt ends with "qpair failed and we were unable to recover it." Console timestamps for this stretch run from 00:36:52.008 to 00:36:52.295. ...]
00:36:52.295 [2024-12-14 03:18:07.092571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.295 [2024-12-14 03:18:07.092596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.295 qpair failed and we were unable to recover it. 00:36:52.295 [2024-12-14 03:18:07.092878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.295 [2024-12-14 03:18:07.092902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.295 qpair failed and we were unable to recover it. 00:36:52.295 [2024-12-14 03:18:07.093091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.295 [2024-12-14 03:18:07.093116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.295 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.093291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.093322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.093523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.093546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.093732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.093757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.093878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.093903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.094153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.094178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.094461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.094487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.094765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.094789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 
00:36:52.296 [2024-12-14 03:18:07.094895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.094918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.095176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.095201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.095455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.095481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.095720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.095760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.095945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.095980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.096165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.096199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.096394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.096429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.096636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.096678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.096879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.096903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.097029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.097054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 
00:36:52.296 [2024-12-14 03:18:07.097232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.097256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.097514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.097540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.097673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.097697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.097837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.097870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.098067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.098106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.098306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.098352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.098489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.098524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.098786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.098821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.099104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.099139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.099443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.099470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 
00:36:52.296 [2024-12-14 03:18:07.099751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.099796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.100079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.100113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.100390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.100425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.100563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.100599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.100898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.100933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.101221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.101256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.101463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.101506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.101707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.101732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.101910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.101933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.102120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.102154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 
00:36:52.296 [2024-12-14 03:18:07.102440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.102476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.102629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.102655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.102846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.102870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.103051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.103076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.103283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.296 [2024-12-14 03:18:07.103308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.296 qpair failed and we were unable to recover it. 00:36:52.296 [2024-12-14 03:18:07.103555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.103589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.103725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.103760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.104013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.104046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.104236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.104270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.104440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.104488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 
00:36:52.297 [2024-12-14 03:18:07.104665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.104689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.104922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.104956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.105209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.105246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.105476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.105502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.105631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.105666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.105873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.105910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.106149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.106185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.106353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.106390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.106596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.106621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.106812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.106851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 
00:36:52.297 [2024-12-14 03:18:07.107159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.107194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.107407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.107443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.107724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.107759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.107953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.107978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.108093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.108119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.108354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.108380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.108514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.108538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.108732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.108757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.108934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.108958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.109218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.109253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 
00:36:52.297 [2024-12-14 03:18:07.109589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.109615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.109855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.109880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.110142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.110167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.110415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.110440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.110675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.110701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.110956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.110980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.111211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.111236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.111431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.111457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.111702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.111727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.111911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.111936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 
00:36:52.297 [2024-12-14 03:18:07.112054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.112079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.112300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.297 [2024-12-14 03:18:07.112336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.297 qpair failed and we were unable to recover it. 00:36:52.297 [2024-12-14 03:18:07.112614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.112657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.112815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.112850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.113001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.113035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.113230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.113262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.113486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.113524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.113802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.113838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.114074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.114109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.114245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.114280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 
00:36:52.298 [2024-12-14 03:18:07.114538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.114563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.114690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.114725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.115014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.115048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.115239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.115283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.115487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.115513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.115751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.115775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.115958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.115982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.116165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.116197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.116331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.116357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.116594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.116621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 
00:36:52.298 [2024-12-14 03:18:07.116837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.116861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.117054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.117079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.117250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.117274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.117501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.117537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.117766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.117805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.118048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.118083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.118348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.118385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.118640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.118675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.118804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.118845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.118970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.119003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 
00:36:52.298 [2024-12-14 03:18:07.119204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.119238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.119460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.119485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.119667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.119692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.119817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.119842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.120028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.120053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.120165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.120190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.120439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.120466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.120587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.120610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.120803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.120837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.120987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.121021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 
00:36:52.298 [2024-12-14 03:18:07.121228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.121261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.121500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.121537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.121762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.121796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.122079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.298 [2024-12-14 03:18:07.122114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.298 qpair failed and we were unable to recover it. 00:36:52.298 [2024-12-14 03:18:07.122337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-12-14 03:18:07.122374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-12-14 03:18:07.122583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-12-14 03:18:07.122608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-12-14 03:18:07.122841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-12-14 03:18:07.122866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-12-14 03:18:07.123101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-12-14 03:18:07.123126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-12-14 03:18:07.123292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-12-14 03:18:07.123323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 00:36:52.299 [2024-12-14 03:18:07.123548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.299 [2024-12-14 03:18:07.123583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.299 qpair failed and we were unable to recover it. 
00:36:52.299 [2024-12-14 03:18:07.123785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.299 [2024-12-14 03:18:07.123820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.299 qpair failed and we were unable to recover it.
[... the same three messages repeat for every intervening connection attempt on tqpair=0x1ca6cd0 (addr=10.0.0.2, port=4420, errno = 111); console timestamps 00:36:52.299 through 00:36:52.304, application timestamps 2024-12-14 03:18:07.124 through 03:18:07.173 ...]
00:36:52.304 [2024-12-14 03:18:07.173692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.304 [2024-12-14 03:18:07.173728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.304 qpair failed and we were unable to recover it.
00:36:52.304 [2024-12-14 03:18:07.173989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.174013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.174220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.174245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.174360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.174384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.174517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.174541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.174741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.174765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.174864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.174887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.175017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.175041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.175218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.175243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.175378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.175404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.175591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.175615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 
00:36:52.304 [2024-12-14 03:18:07.175795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.175821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.176074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.176098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.176332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.176361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.176545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.176569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.176699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.176726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.177021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.177055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.177252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.177286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.177524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.177559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.177814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.177838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.178103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.178149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 
00:36:52.304 [2024-12-14 03:18:07.178462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.178498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.304 [2024-12-14 03:18:07.178689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.304 [2024-12-14 03:18:07.178725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.304 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.178861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.178885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.179143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.179169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.179370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.179396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.179599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.179624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.179797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.179821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.180161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.180197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.180403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.180438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.180690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.180713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 
00:36:52.305 [2024-12-14 03:18:07.180883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.180918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.181046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.181081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.181234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.181268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.181424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.181459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.181683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.181718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.182022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.182056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.182341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.182379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.182589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.182622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.182816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.182850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.183162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.183198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 
00:36:52.305 [2024-12-14 03:18:07.183452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.183488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.183746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.183782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.183905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.183940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.184088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.184122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.184264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.184297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.184517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.184551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.184764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.184798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.185168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.185203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.185400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.185436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.185644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.185678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 
00:36:52.305 [2024-12-14 03:18:07.185993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.186018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.186148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.186172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.186360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.186386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.186492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.186514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.186691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.186715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.186952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.186987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.187310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.187367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.187505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.187540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.187695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.187730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.187877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.187902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 
00:36:52.305 [2024-12-14 03:18:07.188161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.188195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.188458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.188494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.188637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.188671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.188859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.305 [2024-12-14 03:18:07.188893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.305 qpair failed and we were unable to recover it. 00:36:52.305 [2024-12-14 03:18:07.189078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.189113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.189325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.189361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.189588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.189624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.189864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.189901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.190208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.190233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.190484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.190510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 
00:36:52.306 [2024-12-14 03:18:07.190633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.190666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.190865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.190900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.191217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.191251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.191479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.191516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.191722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.191755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.191910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.191944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.192098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.192132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.192346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.192381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.192583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.192618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.192756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.192789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 
00:36:52.306 [2024-12-14 03:18:07.193064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.193093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.193279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.193304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.193426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.193451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.193625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.193648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.193878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.193903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.193997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.194019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.194215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.194296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.194511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.194551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.194735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.194763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.195051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.195076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 
00:36:52.306 [2024-12-14 03:18:07.195193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.195216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.195426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.195451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.195576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.195600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.195738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.195764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.195873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.195895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.196089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.196116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.196309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.196344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.196447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.196473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.196603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.196628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 00:36:52.306 [2024-12-14 03:18:07.196760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.306 [2024-12-14 03:18:07.196786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.306 qpair failed and we were unable to recover it. 
00:36:52.306 [2024-12-14 03:18:07.197028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.197053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.197329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.197355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.197472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.197495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.197693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.197717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.197843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.197867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.197979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.198005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.198291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.198325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.198510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.198538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.198722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.198746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.198871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.198898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 
00:36:52.307 [2024-12-14 03:18:07.199161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.199186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.199363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.199389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.199630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.199654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.199817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.199844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.200118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.200142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.200375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.200401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.200576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.200601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.200700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.200721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.200981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.201008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.201262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.201288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 
00:36:52.307 [2024-12-14 03:18:07.201522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.201547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.201739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.201764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.201881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.201906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.202109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.202134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.202326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.202351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.202491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.202515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.202699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.202733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.202875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.202911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.203115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.203150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 00:36:52.307 [2024-12-14 03:18:07.203376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.307 [2024-12-14 03:18:07.203412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.307 qpair failed and we were unable to recover it. 
00:36:52.307 [2024-12-14 03:18:07.203681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.307 [2024-12-14 03:18:07.203717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.307 qpair failed and we were unable to recover it.
00:36:52.312 [2024-12-14 03:18:07.255494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.312 [2024-12-14 03:18:07.255519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.312 qpair failed and we were unable to recover it.
00:36:52.313 [2024-12-14 03:18:07.255679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.255704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.255893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.255928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.256205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.256240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.256491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.256527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.256730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.256765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.256986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.257012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.257261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.257285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.257418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.257444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.257578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.257603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.257790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.257815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 
00:36:52.313 [2024-12-14 03:18:07.258061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.258086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.258289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.258323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.258506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.258531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.258634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.258657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.258844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.258869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.259149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.259193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.259410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.259446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.259654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.259688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.259906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.259941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 00:36:52.313 [2024-12-14 03:18:07.260205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.313 [2024-12-14 03:18:07.260239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.313 qpair failed and we were unable to recover it. 
00:36:52.313 [2024-12-14 03:18:07.261855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.313 [2024-12-14 03:18:07.261937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420
00:36:52.313 qpair failed and we were unable to recover it.
00:36:52.313 [2024-12-14 03:18:07.264199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.313 [2024-12-14 03:18:07.264237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.313 qpair failed and we were unable to recover it.
00:36:52.314 [2024-12-14 03:18:07.268135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.314 [2024-12-14 03:18:07.268173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420
00:36:52.314 qpair failed and we were unable to recover it.
00:36:52.314 [2024-12-14 03:18:07.269636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.314 [2024-12-14 03:18:07.269676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.314 qpair failed and we were unable to recover it.
00:36:52.317 [2024-12-14 03:18:07.294720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.317 [2024-12-14 03:18:07.294745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.317 qpair failed and we were unable to recover it.
00:36:52.317 [2024-12-14 03:18:07.295003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.295029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.295213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.295238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.295351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.295376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.295475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.295499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.295597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.295621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.295788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.295813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.295918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.295942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.296121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.296146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.296345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.296371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.296609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.296635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 
00:36:52.317 [2024-12-14 03:18:07.296827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.296853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.297021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.297046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.297167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.297191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.297369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.297395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.297489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.297512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.297630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.297655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.297765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.297789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.298021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.298046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.298165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.298188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.298442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.298468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 
00:36:52.317 [2024-12-14 03:18:07.298652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.298678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.298799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.298821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.299029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.299054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.299234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.299261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.299441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.299466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.299748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.299774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.299872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.299896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.300079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.300104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.300228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.300253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.300441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.300467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 
00:36:52.317 [2024-12-14 03:18:07.300696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.300721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.317 [2024-12-14 03:18:07.300974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.317 [2024-12-14 03:18:07.301000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.317 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.301175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.301202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.301380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.301408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.301597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.301622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.301856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.301882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.302047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.302072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.302230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.302254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.302364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.302388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.302498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.302521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 
00:36:52.318 [2024-12-14 03:18:07.302785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.302809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.302932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.302958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.303124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.303150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.303321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.303346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.303458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.303483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.303620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.303644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.303809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.303835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.304014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.304039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.304165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.304190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.304385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.304412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 
00:36:52.318 [2024-12-14 03:18:07.304510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.304533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.304771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.304795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.304973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.304999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.305113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.305137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.305396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.305421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.305616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.305642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.305750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.305776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.305886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.305920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.306122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.306152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.306320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.306346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 
00:36:52.318 [2024-12-14 03:18:07.306520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.306545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.306724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.306749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.306913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.306938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.307138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.307162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.307327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.307353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.307536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.307560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.307728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.307753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.307854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.307876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.308109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.308133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 00:36:52.318 [2024-12-14 03:18:07.308239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.308263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.318 qpair failed and we were unable to recover it. 
00:36:52.318 [2024-12-14 03:18:07.308386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.318 [2024-12-14 03:18:07.308411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.308596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.308621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.308790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.308814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.308998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.309023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.309124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.309148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.309402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.309428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.309650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.309674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.309920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.309944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.310120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.310145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.310309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.310343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 
00:36:52.319 [2024-12-14 03:18:07.310446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.310468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.310553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.310575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.310810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.310834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.310954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.310979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.311107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.311131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.311299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.311336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.311567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.311592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.311759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.311783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.311944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.311969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.312068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.312090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 
00:36:52.319 [2024-12-14 03:18:07.312270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.312294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.312431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.312456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.312652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.312676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.312774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.312798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.312975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.312999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.313162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.313186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.313283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.313307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.313548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.313573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.313679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.313702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.313882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.313907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 
00:36:52.319 [2024-12-14 03:18:07.314077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.314100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.314211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.314233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.314330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.314353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.314521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.314545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.314639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.314661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.314909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.314934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.315142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.315167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.315335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.315361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.315524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.315548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.315729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.315754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 
00:36:52.319 [2024-12-14 03:18:07.315851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.319 [2024-12-14 03:18:07.315873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.319 qpair failed and we were unable to recover it. 00:36:52.319 [2024-12-14 03:18:07.316150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.316175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.316260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.316282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.316408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.316433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.316615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.316641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.316878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.316903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.317080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.317105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.317334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.317359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.317590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.317615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.317889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.317913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 
00:36:52.320 [2024-12-14 03:18:07.318167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.318191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.318449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.318474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.318705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.318729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.318844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.318868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.318981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.319004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.319110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.319134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.319258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.319282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.319473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.319498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.319653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.319676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 00:36:52.320 [2024-12-14 03:18:07.319786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.320 [2024-12-14 03:18:07.319810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.320 qpair failed and we were unable to recover it. 
00:36:52.320 [2024-12-14 03:18:07.319967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.320 [2024-12-14 03:18:07.319992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.320 qpair failed and we were unable to recover it.
[The same three-line error sequence repeats, with only the timestamps advancing, for every reconnection attempt from 2024-12-14 03:18:07.319967 through 03:18:07.366529 (console time 00:36:52.320-00:36:52.325): each connect() to 10.0.0.2, port 4420 is refused with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x1ca6cd0, and each qpair fails without recovery.]
00:36:52.325 [2024-12-14 03:18:07.366816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.366843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.367009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.367034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.367264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.367288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.367567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.367595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.367875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.367899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.368088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.368112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.368302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.368336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.368590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.368616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.368873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.368898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.369119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.369143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 
00:36:52.325 [2024-12-14 03:18:07.369394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.369419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.369606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.369630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.369865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.369890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.370148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.370175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.370428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.370453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.370727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.370752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.371044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.371068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.371250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.371275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.371489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.371514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 00:36:52.325 [2024-12-14 03:18:07.371784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.325 [2024-12-14 03:18:07.371808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.325 qpair failed and we were unable to recover it. 
00:36:52.326 [2024-12-14 03:18:07.372093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.372118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.372236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.372260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.372493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.372519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.372638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.372663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.372792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.372815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.372987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.373012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.373204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.373229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.373393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.373421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.373582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.373607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.373887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.373912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 
00:36:52.326 [2024-12-14 03:18:07.374027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.374057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.374239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.374264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.374387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.374411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.374600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.374625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.374811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.374841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.375100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.375124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.375360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.375384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.375645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.375670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.375854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.375879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.376069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.376093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 
00:36:52.326 [2024-12-14 03:18:07.376332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.376358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.376594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.376618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.376785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.376809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.377015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.377040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.377278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.377302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.377444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.377468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.377598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.377622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.377734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.377759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.377855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.377878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.378079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.378104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 
00:36:52.326 [2024-12-14 03:18:07.378267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.378292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.378565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.378590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.378733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.378757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.378973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.378999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.379165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.379190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.379375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.379401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.379494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.379516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.379633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.379664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.379845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.379869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.380059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.380084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 
00:36:52.326 [2024-12-14 03:18:07.380325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.380350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.380592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.380617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.326 qpair failed and we were unable to recover it. 00:36:52.326 [2024-12-14 03:18:07.380807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.326 [2024-12-14 03:18:07.380831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.381120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.381145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.381378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.381403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.381605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.381630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.381748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.381772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.381875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.381897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.382087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.382111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.382276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.382301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 
00:36:52.327 [2024-12-14 03:18:07.382447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.382472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.382664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.382689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.382867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.382891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.383104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.383128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.383363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.383389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.383570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.383595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.383842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.383866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.384053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.384077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.384258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.384283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.384559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.384584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 
00:36:52.327 [2024-12-14 03:18:07.384709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.384734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.384857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.384881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.385049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.385074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.385305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.385341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.385547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.385577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.385771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.385807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.386015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.386049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.386252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.386286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.386468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.386493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.386774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.386799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 
00:36:52.327 [2024-12-14 03:18:07.387007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.387032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.387217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.387241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.387500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.387524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.327 [2024-12-14 03:18:07.387635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.327 [2024-12-14 03:18:07.387659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.327 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.387960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.387985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.388222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.388246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.388370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.388395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.388555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.388580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.388704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.388728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.388906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.388929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 
00:36:52.328 [2024-12-14 03:18:07.389093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.389118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.389331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.389379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.389545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.389580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.389720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.389755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.389979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.390014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.390290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.390352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.390580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.390615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.390855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.390890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.391084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.391118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.391301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.391351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 
00:36:52.328 [2024-12-14 03:18:07.391630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.391666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.391912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.391946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.392215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.392250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.392488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.392514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.392651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.392676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.392985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.393011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.393204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.393228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.393418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.393444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.393666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.393691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.393922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.393947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 
00:36:52.328 [2024-12-14 03:18:07.394205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.394238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.394497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.394532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.394733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.394767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.395076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.395110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.395388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.395425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.395707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.395739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.395928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.395953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.396214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.396239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.396500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.396526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 00:36:52.328 [2024-12-14 03:18:07.396772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.328 [2024-12-14 03:18:07.396806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.328 qpair failed and we were unable to recover it. 
00:36:52.328 [2024-12-14 03:18:07.397007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.328 [2024-12-14 03:18:07.397042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.328 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously from 03:18:07.397 through 03:18:07.446 (log timestamps 00:36:52.328 through 00:36:52.618): every reconnect attempt for tqpair=0x1ca6cd0 to addr=10.0.0.2, port=4420 fails in posix_sock_create with errno = 111, and nvme_tcp_qpair_connect_sock reports the qpair as failed and unrecoverable ...]
00:36:52.618 [2024-12-14 03:18:07.446916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.618 [2024-12-14 03:18:07.446951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.618 qpair failed and we were unable to recover it. 00:36:52.618 [2024-12-14 03:18:07.447204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.618 [2024-12-14 03:18:07.447239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.618 qpair failed and we were unable to recover it. 00:36:52.618 [2024-12-14 03:18:07.447430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.618 [2024-12-14 03:18:07.447460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.447628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.447665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.447918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.447953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.448152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.448188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.448391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.448428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.448653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.448678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.448889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.448915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.449025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.449051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 
00:36:52.619 [2024-12-14 03:18:07.449283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.449310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.449506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.449531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.449633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.449658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.449851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.449887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.450083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.450118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.450334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.450370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.450592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.450628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.450880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.450916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.451211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.451247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.451375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.451400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 
00:36:52.619 [2024-12-14 03:18:07.451516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.451542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.451733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.451768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.451963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.451997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.452273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.452310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.452463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.452498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.452706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.452741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.452930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.452964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.453104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.453139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.453331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.453358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.453465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.453494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 
00:36:52.619 [2024-12-14 03:18:07.453612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.453636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.453835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.453861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.454032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.454057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.454242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.454277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.454551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.454588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.454861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.454896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.455116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.455150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.455340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.619 [2024-12-14 03:18:07.455377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.619 qpair failed and we were unable to recover it. 00:36:52.619 [2024-12-14 03:18:07.455565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.455591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.455763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.455799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 
00:36:52.620 [2024-12-14 03:18:07.456008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.456045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.456230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.456276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.456458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.456485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.456680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.456715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.456846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.456881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.457066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.457101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.457215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.457240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.457489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.457517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.457683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.457708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.457872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.457897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 
00:36:52.620 [2024-12-14 03:18:07.458089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.458125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.458322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.458347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.458592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.458627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.458824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.458859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.458984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.459017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.459159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.459184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.459452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.459493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.459627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.459661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.459871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.459906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.460042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.460075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 
00:36:52.620 [2024-12-14 03:18:07.460277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.460323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.460583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.460618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.460801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.460835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.460982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.461017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.461295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.461348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.461548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.461582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.461820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.461855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.461984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.462019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.462213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.462237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.462490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.462515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 
00:36:52.620 [2024-12-14 03:18:07.462754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.462788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.462994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.463028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.620 [2024-12-14 03:18:07.463224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.620 [2024-12-14 03:18:07.463248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.620 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.463360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.463387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.463488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.463511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.463690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.463714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.463876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.463901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.464012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.464034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.464265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.464290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.464498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.464522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 
00:36:52.621 [2024-12-14 03:18:07.464628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.464653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.464778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.464803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.464921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.464944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.465224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.465258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.465465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.465500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.465696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.465730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.466013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.466049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.466182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.466216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.466402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.466437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.466659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.466695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 
00:36:52.621 [2024-12-14 03:18:07.466992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.467017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.467132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.467157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.467342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.467367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.467574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.467608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.467812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.467847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.468039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.468073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.468343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.468369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.468569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.468594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.468726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.468750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.468865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.468890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 
00:36:52.621 [2024-12-14 03:18:07.469055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.469080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.469199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.469234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.469451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.469487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.469669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.469704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.469898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.469931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.470182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.470217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.470334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.470359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.470534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.470568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.470797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.621 [2024-12-14 03:18:07.470831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.621 qpair failed and we were unable to recover it. 00:36:52.621 [2024-12-14 03:18:07.470960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.470994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 
00:36:52.622 [2024-12-14 03:18:07.471175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.471210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.471334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.471379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.471502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.471526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.471731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.471755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.471940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.471975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.472166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.472190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.472290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.472323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.472504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.472528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.472646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.472669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.472896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.472930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 
00:36:52.622 [2024-12-14 03:18:07.473071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.473106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.473379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.473415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.473554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.473589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.473792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.473827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.474102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.474149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.474293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.474331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.474455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.474479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.474638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.474662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.474898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.474923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 00:36:52.622 [2024-12-14 03:18:07.475179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.622 [2024-12-14 03:18:07.475203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.622 qpair failed and we were unable to recover it. 
00:36:52.622 [2024-12-14 03:18:07.475451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.622 [2024-12-14 03:18:07.475477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.622 qpair failed and we were unable to recover it.
00:36:52.622 [... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt between this entry and the one below ...]
00:36:52.627 [2024-12-14 03:18:07.526739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.627 [2024-12-14 03:18:07.526764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.627 qpair failed and we were unable to recover it.
00:36:52.627 [2024-12-14 03:18:07.526881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.627 [2024-12-14 03:18:07.526904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.627 qpair failed and we were unable to recover it. 00:36:52.627 [2024-12-14 03:18:07.527171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.627 [2024-12-14 03:18:07.527211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.627 qpair failed and we were unable to recover it. 00:36:52.627 [2024-12-14 03:18:07.527532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.627 [2024-12-14 03:18:07.527569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.627 qpair failed and we were unable to recover it. 00:36:52.627 [2024-12-14 03:18:07.527728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.527771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.527891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.527916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.528091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.528128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.528394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.528432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.528630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.528665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.528839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.528863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.529042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.529077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 
00:36:52.628 [2024-12-14 03:18:07.529282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.529328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.529606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.529642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.529826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.529862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.530133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.530167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.530440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.530466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.530655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.530680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.530960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.530986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.531176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.531201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.531464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.531490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.531730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.531755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 
00:36:52.628 [2024-12-14 03:18:07.532033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.532059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.532343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.532369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.532498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.532523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.532690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.532715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.532882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.532907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.533169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.533194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.533453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.533478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.533709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.533734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.533932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.533961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.534246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.534272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 
00:36:52.628 [2024-12-14 03:18:07.534468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.534494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.534595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.534619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.534788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.534813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.535029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.535054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.535322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.535348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.535514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.535540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.535787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.535811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.536060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.536086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.536349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.536375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.536553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.536578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 
00:36:52.628 [2024-12-14 03:18:07.536760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.536785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.537057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.537083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.537281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.537306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.537414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.537438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.537611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.537635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.537754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.537778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.537942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.537966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.628 qpair failed and we were unable to recover it. 00:36:52.628 [2024-12-14 03:18:07.538216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.628 [2024-12-14 03:18:07.538242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.538479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.538504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.538713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.538738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 
00:36:52.629 [2024-12-14 03:18:07.538923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.538947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.539136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.539161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.539348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.539374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.539608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.539635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.539821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.539847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.540019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.540049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.540285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.540310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.540454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.540479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.540644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.540669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.540794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.540819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 
00:36:52.629 [2024-12-14 03:18:07.540925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.540949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.541184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.541208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.541383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.541410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.541666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.541691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.541799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.541821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.541989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.542015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.542138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.542163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.542346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.542371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.542549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.542573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.542783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.542809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 
00:36:52.629 [2024-12-14 03:18:07.542916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.542939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.543168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.543248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.543428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.543471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.543599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.543634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.543854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.543888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.544011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.544046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.544203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.544238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.544365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.544400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.544607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.544642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.544836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.544871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 
00:36:52.629 [2024-12-14 03:18:07.545011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.545046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.545228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.545263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.545602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.545634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.545821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.545846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.546012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.546037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.546202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.546227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.546406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.546433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.546643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.546670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.546792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.546815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.546999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.547024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 
00:36:52.629 [2024-12-14 03:18:07.547206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.547232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.547421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.547446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.547573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.629 [2024-12-14 03:18:07.547598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.629 qpair failed and we were unable to recover it. 00:36:52.629 [2024-12-14 03:18:07.547703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.547728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.547848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.547873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.548056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.548081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.548341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.548366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.548531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.548556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.548727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.548753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.548916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.548940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 
00:36:52.630 [2024-12-14 03:18:07.549048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.549071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.549250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.549275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.549407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.549434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.549569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.549594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.549854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.549881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.550049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.550075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.550190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.550216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.550389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.550416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.550654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.550680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.550865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.550890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 
00:36:52.630 [2024-12-14 03:18:07.551059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.551084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.551247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.551272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.551390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.551413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.551647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.551672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.551784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.551808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.551989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.552014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.552144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.552169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.552391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.552416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.552586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.552610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.552794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.552819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 
00:36:52.630 [2024-12-14 03:18:07.552929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.552953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.553123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.553147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.553251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.553276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.553463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.553487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.553589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.553612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.553843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.553868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.554132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.554157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.554334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.554359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.554488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.554511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 00:36:52.630 [2024-12-14 03:18:07.554682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.630 [2024-12-14 03:18:07.554707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.630 qpair failed and we were unable to recover it. 
00:36:52.630 [2024-12-14 03:18:07.554828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.630 [2024-12-14 03:18:07.554852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.630 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt from [2024-12-14 03:18:07.555105] through [2024-12-14 03:18:07.600726] (console timestamps 00:36:52.630-00:36:52.635): connect() to 10.0.0.2, port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x1ca6cd0, and each qpair fails and cannot be recovered ...]
00:36:52.635 [2024-12-14 03:18:07.600882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.635 [2024-12-14 03:18:07.600905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.635 qpair failed and we were unable to recover it.
00:36:52.635 [2024-12-14 03:18:07.601018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.601040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.601216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.601239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.601433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.601456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.601554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.601584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.601694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.601716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.601816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.601837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.602084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.602110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.602294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.602329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.602462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.602485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.602676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.602701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 
00:36:52.635 [2024-12-14 03:18:07.602820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.602844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.603103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.603126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.603300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.603334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.603455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.603479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.603733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.603758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.603936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.603959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.604173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.604196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.604417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.604441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.604605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.604627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.604741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.604767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 
00:36:52.635 [2024-12-14 03:18:07.605027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.605050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.605172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.605197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.605444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.605467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.605675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.605698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.605823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.605847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.606143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.606165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.606368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.606392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.606576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.606603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.606785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.606810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.607014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.607037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 
00:36:52.635 [2024-12-14 03:18:07.607266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.607290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.607401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.607423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.607621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.607644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.607830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.607853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.608052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.608075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.608307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.608341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.608596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.608619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.608802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.608825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.608936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.608966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.609219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.609242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 
00:36:52.635 [2024-12-14 03:18:07.609422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.609446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.609640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.609663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.609765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.609788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.609983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.610006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.610168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.610191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.610372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.610396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.610524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.610546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.610719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.610741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.610848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.610871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.611001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.611025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 
00:36:52.635 [2024-12-14 03:18:07.611277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.611301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.611485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.611507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.611644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.611666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.611904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.611927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.612163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.612191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.612360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.612384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.612567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.612592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.612728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.612751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.635 [2024-12-14 03:18:07.612927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.635 [2024-12-14 03:18:07.612950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.635 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.613124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.613147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 
00:36:52.636 [2024-12-14 03:18:07.613244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.613265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.613523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.613547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.613736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.613758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.613900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.613923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.614154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.614177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.614415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.614440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.614625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.614648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.614751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.614773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.614882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.614905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.615169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.615192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 
00:36:52.636 [2024-12-14 03:18:07.615384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.615408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.615518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.615541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.615667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.615689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.615814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.615839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.616041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.616064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.616329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.616354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.616468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.616494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.616622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.616645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.616809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.616832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.617015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.617041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 
00:36:52.636 [2024-12-14 03:18:07.617266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.617290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.617489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.617513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.617666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.617690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.617804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.617828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.617961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.617984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.618173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.618196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.618307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.618355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.618593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.618617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.618734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.618758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.618974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.619004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 
00:36:52.636 [2024-12-14 03:18:07.619257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.619280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.619546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.619571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.619709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.619732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.619858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.619882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.620159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.620182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.620347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.620371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.620550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.620573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.620741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.620765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.620973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.620996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.621187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.621211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 
00:36:52.636 [2024-12-14 03:18:07.621393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.621417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.621601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.621626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.621813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.621835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.622011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.622034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.622212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.622235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.622427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.622451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.622624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.622647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.622832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.622855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.622991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.623013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.623146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.623170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 
00:36:52.636 [2024-12-14 03:18:07.623339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.623363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.623472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.623494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.623666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.623688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.623855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.623879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.624002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.624024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.624184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.624208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.624442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.624467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.624658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.624682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.624771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.624792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.624984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.625006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 
00:36:52.636 [2024-12-14 03:18:07.625128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.625151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.625324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.625349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.625459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.625487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.636 [2024-12-14 03:18:07.625587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.636 [2024-12-14 03:18:07.625609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.636 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.625772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.625795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.625931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.625954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.626138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.626160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.626340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.626366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.626498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.626521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.626706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.626729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 
00:36:52.637 [2024-12-14 03:18:07.626905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.626928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.627175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.627198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.627385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.627413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.627543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.627568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.627799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.627822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.627995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.628019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.628147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.628171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.628364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.628392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.628513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.628535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.628632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.628655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 
00:36:52.637 [2024-12-14 03:18:07.628816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.628839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.628953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.628978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.629141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.629172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.629350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.629375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.629465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.629487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.629579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.629599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.629699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.629722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.629822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.629850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.629950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.629971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.630076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.630104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 
00:36:52.637 [2024-12-14 03:18:07.630302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.630351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.630476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.630499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.630663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.630685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.630876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.630898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.630995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.631021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.631321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.631347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.631508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.631530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.631715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.631738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.632026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.632050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.632137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.632160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 
00:36:52.637 [2024-12-14 03:18:07.632333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.632358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.632468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.632491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.632680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.632707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.632832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.632856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.633133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.633157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.633341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.633372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.633487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.633510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.633743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.633766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.633868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.633891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.634087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.634111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 
00:36:52.637 [2024-12-14 03:18:07.634230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.634254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.634441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.634464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.634557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.634579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.634767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.634790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.634958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.634981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.635071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.635094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.635250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.635277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.635399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.635424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.635600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.635624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.635876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.635900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 
00:36:52.637 [2024-12-14 03:18:07.636083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.636106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.636207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.636230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.636403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.636432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.637 [2024-12-14 03:18:07.636531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.637 [2024-12-14 03:18:07.636554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.637 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.636789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.636812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.636905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.636929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.637098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.637121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.637331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.637365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.637554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.637587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.637782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.637815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 
00:36:52.638 [2024-12-14 03:18:07.637965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.637989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.638101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.638124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.638281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.638303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.638485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.638508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.638717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.638740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.638927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.638949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.639179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.639201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.639459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.639483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.639615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.639637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.639815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.639836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 
00:36:52.638 [2024-12-14 03:18:07.640010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.640032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.640142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.640165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.640336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.640360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.640471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.640493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.640732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.640755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.640864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.640886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.641077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.641099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.641223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.641246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.641358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.641381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.641616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.641637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 
00:36:52.638 [2024-12-14 03:18:07.641734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.641759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.641888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.641909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.642136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.642158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.642248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.642269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.642474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.642497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.642689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.642712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.642884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.642907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.643030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.643054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.643222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.643245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.643406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.643430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 
00:36:52.638 [2024-12-14 03:18:07.643525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.643547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.643740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.643763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.643957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.643979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.644177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.644200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.644367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.644390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.644488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.644516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.644617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.644637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.644807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.644829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.644937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.644961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.645138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.645160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 
00:36:52.638 [2024-12-14 03:18:07.645337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.645361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.645466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.645489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.645655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.645676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.645774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.645802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.646059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.646091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.646217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.646249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.646375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.646412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.646527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.646558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.646786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.646809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.647044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.647066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 
00:36:52.638 [2024-12-14 03:18:07.647178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.647200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.647438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.647473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.647608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.647642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.647764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.647796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.647954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.647980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.648094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.648117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.648283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.648305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.648406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.648428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.638 qpair failed and we were unable to recover it. 00:36:52.638 [2024-12-14 03:18:07.648566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.638 [2024-12-14 03:18:07.648589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.648781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.648804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 
00:36:52.639 [2024-12-14 03:18:07.648986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.649018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.649216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.649249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.649359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.649393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.649512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.649545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.649733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.649755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.649924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.649950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.650111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.650134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.650249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.650273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.650444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.650466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.650628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.650648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 
00:36:52.639 [2024-12-14 03:18:07.650878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.650901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.651000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.651021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.651141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.651163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.651273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.651295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.651474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.651498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.651685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.651708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.651811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.651833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.651948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.651970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.652229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.652252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.652418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.652441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 
00:36:52.639 [2024-12-14 03:18:07.652545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.652567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.652826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.652853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.652974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.652997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.653162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.653185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.653286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.653321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.653431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.653455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.653558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.653581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.653673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.653703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.653824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.653847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.654005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.654027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 
00:36:52.639 [2024-12-14 03:18:07.654136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.654165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.654269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.654290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.654408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.654433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.654538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.654558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.654721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.654744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.654935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.654959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.655138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.655161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.655339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.655363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.655454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.655475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.655650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.655673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 
00:36:52.639 [2024-12-14 03:18:07.655790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.655811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.655907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.655927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.656086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.656108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.656222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.656246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.656420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.656442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.656562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.656583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.656674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.656695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.656782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.656803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.656895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.656916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.657005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.657025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 
00:36:52.639 [2024-12-14 03:18:07.657108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.657128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.657233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.657254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.657355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.657376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.657532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.657552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.657656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.657676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.657779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.657799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.657909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.657931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.658129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.658152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.658327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.658349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.658528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.658550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 
00:36:52.639 [2024-12-14 03:18:07.658654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.658677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.658775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.658796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.658880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.639 [2024-12-14 03:18:07.658902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.639 qpair failed and we were unable to recover it. 00:36:52.639 [2024-12-14 03:18:07.659055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.659078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.659194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.659216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.659305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.659335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.659532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.659555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.659653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.659676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.659786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.659808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.659984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.660006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 
00:36:52.640 [2024-12-14 03:18:07.660161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.660183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.660292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.660325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.660580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.660602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.660780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.660802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.660895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.660915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.661073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.661094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.661182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.661203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.661370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.661393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.661624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.661646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.661757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.661779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 
00:36:52.640 [2024-12-14 03:18:07.661949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.661971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.662066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.662086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.662173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.662194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.662286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.662306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.662545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.662567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.662719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.662741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.662904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.662927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.663029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.663050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.663232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.663255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.663511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.663538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 
00:36:52.640 [2024-12-14 03:18:07.663717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.663739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.663949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.663972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.664085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.664108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.664344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.664366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.664564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.664586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.664791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.664824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.665011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.665044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.665281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.665325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.665528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.665561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.665753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.665775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 
00:36:52.640 [2024-12-14 03:18:07.666100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.666122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.666284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.666305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.666501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.666533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.666762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.666795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.667002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.667044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.667330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.667353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.667520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.667542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.667650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.667671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.667871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.667893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.668074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.668097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 
00:36:52.640 [2024-12-14 03:18:07.668350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.668373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.668547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.668569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.668730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.668751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.668876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.668898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.669143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.669165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.669262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.669284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.669451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.669478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.669706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.669728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.669970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.669992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.670199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.670220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 
00:36:52.640 [2024-12-14 03:18:07.670384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.670407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.670596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.670619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.670846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.640 [2024-12-14 03:18:07.670868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.640 qpair failed and we were unable to recover it. 00:36:52.640 [2024-12-14 03:18:07.671054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.671077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.671353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.671376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.671626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.671648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.671824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.671846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.672080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.672113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.672336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.672369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.672655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.672687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 
00:36:52.641 [2024-12-14 03:18:07.672841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.672864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.673039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.673078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.673355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.673390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.673588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.673620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.673902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.673925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.674100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.674123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.674362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.674385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.674514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.674536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.674803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.674825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.674935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.674957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 
00:36:52.641 [2024-12-14 03:18:07.675189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.675222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.675468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.675501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.675705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.675738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.675954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.675993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.676183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.676208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.676393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.676417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.676540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.676563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.676674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.676696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.676874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.676896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.677077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.677099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 
00:36:52.641 [2024-12-14 03:18:07.677213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.677235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.677488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.677512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.677789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.677812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.678036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.678057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.678155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.678175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.678430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.678453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.678626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.678649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.678838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.678861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.679053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.679084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.679221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.679253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 
00:36:52.641 [2024-12-14 03:18:07.679496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.679530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.679739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.679761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.680050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.680082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.680344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.680377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.680575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.680608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.680859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.680881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.681137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.681160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.681268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.681290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.681584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.681619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.681818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.681850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 
00:36:52.641 [2024-12-14 03:18:07.682129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.682163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.682405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.682438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.682563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.682596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.682788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.682824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.683153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.683185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.683464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.683498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.683768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.683801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.684051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.684073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.684296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.684328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.684464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.684487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 
00:36:52.641 [2024-12-14 03:18:07.684650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.684672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.684950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.684983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.685113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.685145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.685372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.685405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.685637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.685670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.685805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.685837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.686033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.686066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.686200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.686222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.641 qpair failed and we were unable to recover it. 00:36:52.641 [2024-12-14 03:18:07.686449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.641 [2024-12-14 03:18:07.686473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.686703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.686726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 
00:36:52.642 [2024-12-14 03:18:07.686851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.686874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.686988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.687010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.687293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.687333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.687597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.687622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.687737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.687762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.687995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.688018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.688250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.688272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.688451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.688474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.688606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.688629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.688757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.688779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 
00:36:52.642 [2024-12-14 03:18:07.688941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.688963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.689250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.689273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.689511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.689534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.689709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.689731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.689864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.689886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.690074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.690096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.690351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.690375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.690560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.690583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.690816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.690839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.691060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.691082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 
00:36:52.642 [2024-12-14 03:18:07.691333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.691356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.691487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.691513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.691649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.691672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.691797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.691819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.691985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.692007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.692202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.692224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.692355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.692378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.692579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.692602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.692712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.692734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 00:36:52.642 [2024-12-14 03:18:07.692921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.642 [2024-12-14 03:18:07.692943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.642 qpair failed and we were unable to recover it. 
00:36:52.642 [2024-12-14 03:18:07.693213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.642 [2024-12-14 03:18:07.693236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.642 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously with timestamps from 03:18:07.693 through 03:18:07.726 ...]
00:36:52.933 [2024-12-14 03:18:07.726766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.933 [2024-12-14 03:18:07.726788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.933 qpair failed and we were unable to recover it.
00:36:52.933 [2024-12-14 03:18:07.726982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.727004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.727172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.727195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.727368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.727391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.727478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.727501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.727665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.727687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.727772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.727794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.727883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.727905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.728010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.728032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.728134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.728156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.728408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.728432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 
00:36:52.933 [2024-12-14 03:18:07.728588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.728611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.728726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.728748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.728854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.728876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.728987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.729010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.729177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.729200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.729327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.729351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.729443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.933 [2024-12-14 03:18:07.729464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.933 qpair failed and we were unable to recover it. 00:36:52.933 [2024-12-14 03:18:07.729506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4c70 (9): Bad file descriptor 00:36:52.934 [2024-12-14 03:18:07.729748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.729822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.730069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.730104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 
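Editor's note (not part of the captured log): errno 111 on Linux is ECONNREFUSED, meaning the host at 10.0.0.2 answered but nothing was accepting connections on port 4420 (the conventional NVMe/TCP port), so each connect() attempt fails immediately and the initiator keeps retrying the queue pair; the "Bad file descriptor" flush above is the follow-on cleanup of an already-closed socket. The short C sketch below is an illustrative assumption only, not SPDK code and not part of this run, showing how a plain connect() to a reachable host with no listener on that port reports the same errno = 111:

#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    /* Address and port taken from the log above purely for illustration. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* If the host is reachable but no listener is bound to the port,
         * this prints: connect() failed, errno = 111 (Connection refused). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

End of note; the captured log resumes below.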
00:36:52.934 [2024-12-14 03:18:07.730221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.730245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.730444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.730467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.730559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.730581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.730762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.730784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.730964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.730986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.731144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.731166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.731393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.731416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.731521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.731543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.731643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.731665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.731831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.731853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 
00:36:52.934 [2024-12-14 03:18:07.731945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.731968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.732157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.732180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.732347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.732370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.732541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.732574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.732849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.732882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.733014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.733046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.733165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.733187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.733394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.733418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.733586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.733608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.733708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.733734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 
00:36:52.934 [2024-12-14 03:18:07.733915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.733957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.734188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.734221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.734472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.734506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.734707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.734740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.734880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.734903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.735173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.735195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.735357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.735380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.735501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.735523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.735712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.735734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.735968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.735990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 
00:36:52.934 [2024-12-14 03:18:07.736090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.736112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.736214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.736236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.736393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.736417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.736525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.736548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.934 [2024-12-14 03:18:07.736666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.934 [2024-12-14 03:18:07.736688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.934 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.736913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.736935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.737114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.737136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.737250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.737273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.737375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.737398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.737612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.737635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 
00:36:52.935 [2024-12-14 03:18:07.737815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.737837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.738019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.738042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.738160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.738182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.738272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.738294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.738456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.738479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.738728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.738761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.738954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.738986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.739190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.739223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.739435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.739458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.739702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.739724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 
00:36:52.935 [2024-12-14 03:18:07.739815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.739837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.740062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.740085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.740337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.740362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.740537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.740560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.740740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.740762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.740946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.740968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.741122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.741143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.741321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.741344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.741454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.741476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.741590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.741611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 
00:36:52.935 [2024-12-14 03:18:07.741788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.741810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.741965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.741987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.742180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.742211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.742340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.742372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.742586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.742618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.742811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.742843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.743022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.743054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.743234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.743267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.743571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.743604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.743781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.743812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 
00:36:52.935 [2024-12-14 03:18:07.743994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.744026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.744268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.744289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.744484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.935 [2024-12-14 03:18:07.744506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.935 qpair failed and we were unable to recover it. 00:36:52.935 [2024-12-14 03:18:07.744617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.744656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.744910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.744943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.745129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.745161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.745296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.745327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.745521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.745552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.745738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.745770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.745976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.746006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 
00:36:52.936 [2024-12-14 03:18:07.746248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.746270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.746536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.746558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.746721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.746742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.746831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.746851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.746951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.746972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.747124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.747146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.747242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.747264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.747442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.747469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.747566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.747587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.747759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.747781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 
00:36:52.936 [2024-12-14 03:18:07.748028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.748050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.748153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.748174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.748271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.748291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.748629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.748702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.748905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.748940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.749074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.749105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.749279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.749311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.749448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.749481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.749666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.749697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.749945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.749969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 
00:36:52.936 [2024-12-14 03:18:07.750145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.750167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.750346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.750380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.750489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.750521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.750634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.750665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.750878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.750909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.751021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.751053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.751173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.751201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.751434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.751457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.751683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.751705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 00:36:52.936 [2024-12-14 03:18:07.751936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.751957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.936 qpair failed and we were unable to recover it. 
00:36:52.936 [2024-12-14 03:18:07.752069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.936 [2024-12-14 03:18:07.752094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.937 qpair failed and we were unable to recover it. 00:36:52.937 [2024-12-14 03:18:07.752207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.937 [2024-12-14 03:18:07.752229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.937 qpair failed and we were unable to recover it. 00:36:52.937 [2024-12-14 03:18:07.752433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.937 [2024-12-14 03:18:07.752455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.937 qpair failed and we were unable to recover it. 00:36:52.937 [2024-12-14 03:18:07.752638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.937 [2024-12-14 03:18:07.752659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.937 qpair failed and we were unable to recover it. 00:36:52.937 [2024-12-14 03:18:07.752826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.937 [2024-12-14 03:18:07.752852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.937 qpair failed and we were unable to recover it. 00:36:52.937 [2024-12-14 03:18:07.753093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.937 [2024-12-14 03:18:07.753115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.937 qpair failed and we were unable to recover it. 00:36:52.937 [2024-12-14 03:18:07.753297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.937 [2024-12-14 03:18:07.753326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.937 qpair failed and we were unable to recover it. 00:36:52.937 [2024-12-14 03:18:07.753494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.937 [2024-12-14 03:18:07.753525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.937 qpair failed and we were unable to recover it. 00:36:52.937 [2024-12-14 03:18:07.753633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.937 [2024-12-14 03:18:07.753665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.937 qpair failed and we were unable to recover it. 00:36:52.937 [2024-12-14 03:18:07.753852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.937 [2024-12-14 03:18:07.753883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.937 qpair failed and we were unable to recover it. 
00:36:52.943 [2024-12-14 03:18:07.790949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.790971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.791129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.791150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.791334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.791356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.791573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.791595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.791701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.791723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.791809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.791830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.791993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.792014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.792206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.792227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.792393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.792416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.792522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.792543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 
00:36:52.943 [2024-12-14 03:18:07.792759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.792780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.793001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.793026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.793188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.793210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.793415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.793438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.793521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.793541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.793705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.793726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.793973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.793995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.794181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.794203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.794296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.794335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.794506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.794527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 
00:36:52.943 [2024-12-14 03:18:07.794626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.794647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.794742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.794763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.794986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.795007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.795153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.795175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.795270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.795291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.795400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.795422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.795571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.795593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.795678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.795699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.795915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.795937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.796033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.796054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 
00:36:52.943 [2024-12-14 03:18:07.796202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.796222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.796328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.796350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.796526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.796547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.796641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.796662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.796862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.796884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.797072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.797094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.797268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.797289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-14 03:18:07.797394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-14 03:18:07.797416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.797581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.797606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.797760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.797782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 
00:36:52.944 [2024-12-14 03:18:07.798020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.798042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.798224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.798245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.798351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.798373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.798566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.798588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.798691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.798712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.798932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.798953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.799143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.799165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.799259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.799280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.799391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.799412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.799513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.799534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 
00:36:52.944 [2024-12-14 03:18:07.799699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.799720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.799880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.799901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.800070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.800092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.800308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.800338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.800490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.800511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.800617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.800639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.800732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.800753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.801016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.801037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.801277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.801299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.801394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.801416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 
00:36:52.944 [2024-12-14 03:18:07.801577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.801598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.801749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.801770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.801882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.801903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.802089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.802111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.802220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.802241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.802351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.802374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.802482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.802503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.802616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.802637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.802793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.802814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.802896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.802916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 
00:36:52.944 [2024-12-14 03:18:07.803144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.803165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.803327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-14 03:18:07.803349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-14 03:18:07.803522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.803543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.803714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.803735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.803831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.803852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.803951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.803972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.804215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.804236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.804385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.804407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.804560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.804581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.804745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.804813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 
00:36:52.945 [2024-12-14 03:18:07.805021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.805055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.805161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.805192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.805371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.805406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.805582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.805612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.805816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.805846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.806025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.806050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.806149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.806170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.806415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.806437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.806674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.806696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.806911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.806935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 
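The records above all report errno = 111, which is ECONNREFUSED on Linux: nothing is accepting TCP connections on 10.0.0.2 port 4420 at that moment, so every attempt to (re)build the qpair's socket is rejected and the qpair cannot recover. The sketch below is a standalone illustration of how such a refusal is typically observed on a non-blocking socket (connect(), then poll(), then SO_ERROR); it is not SPDK's posix_sock_create() or nvme_tcp_qpair_connect_sock(), and apart from the address and port taken from the log, every value in it (timeout, flags) is an assumption made for the example.

/* Illustrative sketch only, not SPDK code: observing ECONNREFUSED (errno 111)
 * on a non-blocking TCP connect via poll() + SO_ERROR. The address and port
 * come from the log above (10.0.0.2:4420); the 1 s timeout is arbitrary. */
#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0 && errno != EINPROGRESS) {
        /* Immediate refusal: the same errno the log lines show. */
        fprintf(stderr, "connect() failed, errno = %d\n", errno);
        close(fd);
        return 1;
    }

    /* Wait for the asynchronous connect to finish, then read its result. */
    struct pollfd pfd = { .fd = fd, .events = POLLOUT };
    poll(&pfd, 1, 1000 /* ms, arbitrary timeout for the sketch */);

    int err = 0;
    socklen_t len = sizeof(err);
    getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
    if (err != 0)
        fprintf(stderr, "connect() failed, errno = %d (%s)\n", err, strerror(err));
    else
        printf("connected to 10.0.0.2:4420\n");

    close(fd);
    return err ? 1 : 0;
}

Read this way, the two messages per attempt map onto the two layers involved: the socket layer (posix.c) reports the raw connect() refusal, and the NVMe/TCP transport layer (nvme_tcp.c) reports that the qpair built on top of it could not be connected.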
00:36:52.945 [2024-12-14 03:18:07.807039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.807060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.807220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.807241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.807477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.807500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.807676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.807698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.807792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.807813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.807964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.807985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.808133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.808155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.808258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.808279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.808518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.808540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.808693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.808714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 
00:36:52.945 [2024-12-14 03:18:07.808793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.808814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.808974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.808995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.809145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.809167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.809403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.809425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.809511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.809532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.809638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.809660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.809770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.809792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.809957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.809979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.810125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.810147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-14 03:18:07.810258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.810279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 
00:36:52.945 [2024-12-14 03:18:07.810409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-14 03:18:07.810431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.810592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.810613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.810702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.810723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.810991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.811012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.811173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.811194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.811303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.811334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.811437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.811459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.811637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.811658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.811759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.811780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.811929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.811950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 
00:36:52.946 [2024-12-14 03:18:07.812104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.812126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.812285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.812306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.812469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.812491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.812650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.812671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.812779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.812801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.812897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.812918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.813043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.813065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.813165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.813186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.813337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.813359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-14 03:18:07.813556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-14 03:18:07.813578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 
00:36:52.946 [2024-12-14 03:18:07.813725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.946 [2024-12-14 03:18:07.813746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.946 qpair failed and we were unable to recover it.
[... the same three-line record repeats continuously for tqpair=0x1ca6cd0, timestamps 03:18:07.813 through 03:18:07.840 ...]
00:36:52.951 [2024-12-14 03:18:07.840204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.951 [2024-12-14 03:18:07.840274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420
00:36:52.951 qpair failed and we were unable to recover it.
00:36:52.951 [2024-12-14 03:18:07.840481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.951 [2024-12-14 03:18:07.840532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420
00:36:52.951 qpair failed and we were unable to recover it.
[... the record repeats again for tqpair=0x1ca6cd0, timestamps 03:18:07.840 through 03:18:07.843 ...]
00:36:52.951 [2024-12-14 03:18:07.843215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.951 [2024-12-14 03:18:07.843285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420
00:36:52.951 qpair failed and we were unable to recover it.
00:36:52.951 [2024-12-14 03:18:07.843627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.951 [2024-12-14 03:18:07.843698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420
00:36:52.951 qpair failed and we were unable to recover it.
[... four further repeats for tqpair=0x7f3b58000b90, timestamps 03:18:07.843 through 03:18:07.844, then the record resumes for tqpair=0x1ca6cd0 ...]
00:36:52.952 [2024-12-14 03:18:07.846206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.952 [2024-12-14 03:18:07.846227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.952 qpair failed and we were unable to recover it.
00:36:52.952 [2024-12-14 03:18:07.846310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.846339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.846451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.846472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.846552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.846573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.846662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.846684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.846769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.846791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.846957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.846978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.847067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.847088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.847176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.847198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.847302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.847330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.847416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.847437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 
00:36:52.952 [2024-12-14 03:18:07.847567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.847600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.847798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.847837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.847954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.847986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.848081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.848104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.848205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.848227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.848321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.848343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.848431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.848453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.848537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.848558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.848642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.848663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.848821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.848843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 
00:36:52.952 [2024-12-14 03:18:07.848934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.848955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.849104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.849125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.849210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.849231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.849329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.849352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.952 [2024-12-14 03:18:07.849440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.952 [2024-12-14 03:18:07.849462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.952 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.849613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.849634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.849711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.849733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.849893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.849914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.850065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.850087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.850250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.850271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 
00:36:52.953 [2024-12-14 03:18:07.850430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.850452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.850540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.850561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.850662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.850683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.850780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.850802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.850894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.850915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.851144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.851165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.851279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.851301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.851415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.851440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.851523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.851545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.851704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.851725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 
00:36:52.953 [2024-12-14 03:18:07.851824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.851845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.852019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.852040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.852138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.852159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.852326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.852348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.852452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.852473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.852561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.852582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.852732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.852754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.852909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.852930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.853026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.853047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.853146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.853168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 
00:36:52.953 [2024-12-14 03:18:07.853259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.853280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.853420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.853443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.853699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.853721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.853806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.853829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.853924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.853945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.854029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.854052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.854136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.854158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.854309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.854341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.854418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.854440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.854588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.854609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 
00:36:52.953 [2024-12-14 03:18:07.854754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.854775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.854864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.854886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-14 03:18:07.855042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-14 03:18:07.855062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.855151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.855172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.855267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.855292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.855411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.855445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.855562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.855592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.855703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.855734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.855907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.855931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.856020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.856041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 
00:36:52.954 [2024-12-14 03:18:07.856131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.856152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.856247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.856268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.856352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.856373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.856459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.856480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.856654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.856675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.856823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.856845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.857003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.857024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.857116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.857138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.857242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.857264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.857358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.857380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 
00:36:52.954 [2024-12-14 03:18:07.857465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.857487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.857586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.857607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.857757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.857778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.857860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.857881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.857973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.857996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.858095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.858116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.858210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.858231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.858322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.858346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.858447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.858468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.858618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.858640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 
00:36:52.954 [2024-12-14 03:18:07.858723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.858744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.858910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.858935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.859021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.859042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.859137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.859159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.859242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.859264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.859419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.859441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.859598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.859619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.859776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.859797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.859888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.859909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.859987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.860007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 
00:36:52.954 [2024-12-14 03:18:07.860103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.860123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-14 03:18:07.860215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-14 03:18:07.860235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.860348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.860369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.860456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.860476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.860572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.860593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.860688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.860709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.860809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.860830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.860994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.861015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.861099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.861120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.861204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.861224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 
00:36:52.955 [2024-12-14 03:18:07.861388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.861408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.861497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.861516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.861668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.861688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.861840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.861859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.862008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.862028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.862122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.862142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.862224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.862243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.862339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.862359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.862438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.862458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.862565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.862585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 
00:36:52.955 [2024-12-14 03:18:07.862761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.862781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.862876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.862896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.862999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.863019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.863102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.863121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.863208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.863229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.863323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.863344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.863431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.863451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.863535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.863555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.863650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.863671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-14 03:18:07.863772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-14 03:18:07.863791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 
00:36:52.955 [2024-12-14 03:18:07.863909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.955 [2024-12-14 03:18:07.863929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.955 qpair failed and we were unable to recover it.
00:36:52.958 [2024-12-14 03:18:07.876169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.958 [2024-12-14 03:18:07.876206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420
00:36:52.958 qpair failed and we were unable to recover it.
00:36:52.955-00:36:52.961 (the same three-entry sequence repeats continuously from [2024-12-14 03:18:07.863909] through [2024-12-14 03:18:07.893866]: connect() failed, errno = 111; sock connection error of tqpair=0x1ca6cd0 or tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:36:52.961 [2024-12-14 03:18:07.893944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.893965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.894045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.894066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.894164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.894186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.894266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.894287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.894406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.894429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.894514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.894536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.894614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.894635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.894850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.894871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.894980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.895002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.895113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.895134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 
00:36:52.961 [2024-12-14 03:18:07.895220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.895242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.895401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.895423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.895511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.895532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.895617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.895640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.895724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.895747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.895831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.895852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.895938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.895960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.896111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.896133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.896241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.896262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 00:36:52.961 [2024-12-14 03:18:07.896367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.961 [2024-12-14 03:18:07.896390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.961 qpair failed and we were unable to recover it. 
00:36:52.962 [2024-12-14 03:18:07.896607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.896629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.896713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.896734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.896830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.896856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.896945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.896967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.897057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.897078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.897165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.897187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.897270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.897292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.897397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.897419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.897499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.897520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.897598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.897618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 
00:36:52.962 [2024-12-14 03:18:07.897704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.897726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.897800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.897823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.897930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.897952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.898042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.898064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.898152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.898173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.898276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.898298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.898405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.898427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.898515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.898536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.898636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.898657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.898752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.898773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 
00:36:52.962 [2024-12-14 03:18:07.898855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.898878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.898962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.898984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.899065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.899087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.899186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.899207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.899291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.899321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.899409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.899430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.899514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.899534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.899692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.899714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.899797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.899819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.899904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.899925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 
00:36:52.962 [2024-12-14 03:18:07.900029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.900050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.900133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.900155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.900245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.900267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.900356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.900379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.900473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.900495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.900578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.900601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.900702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.900724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.900814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.900836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.962 [2024-12-14 03:18:07.900984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.962 [2024-12-14 03:18:07.901006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.962 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.901088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.901109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 
00:36:52.963 [2024-12-14 03:18:07.901258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.901280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.901378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.901401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.901483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.901504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.901609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.901630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.901707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.901728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.901806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.901827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.901911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.901932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.902013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.902034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.902195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.902216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.902298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.902336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 
00:36:52.963 [2024-12-14 03:18:07.902486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.902508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.902605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.902627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.902789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.902810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.902897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.902918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.903003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.903025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.903118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.903139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.903239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.903260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.903367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.903390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.903609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.903630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.903730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.903751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 
00:36:52.963 [2024-12-14 03:18:07.903845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.903867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.903959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.903981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.904077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.904098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.904181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.904203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.904296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.904327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.904414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.904435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.904524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.904546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.904634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.904656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.904759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.904781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.904865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.904887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 
00:36:52.963 [2024-12-14 03:18:07.905052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.905077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.905166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.905187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.905356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.905379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.905477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.905498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.905585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.905607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.905688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.905709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.905858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.905881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.963 [2024-12-14 03:18:07.905980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.963 [2024-12-14 03:18:07.906002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.963 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.906086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.906107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.906199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.906221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 
00:36:52.964 [2024-12-14 03:18:07.906371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.906393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.906545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.906567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.906652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.906673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.906770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.906791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.906879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.906901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.907052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.907074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.907158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.907179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.907266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.907287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.907458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.907480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.907630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.907651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 
00:36:52.964 [2024-12-14 03:18:07.907741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.907763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.907859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.907881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.907986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.908007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.908090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.908112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.908268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.908290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.908390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.908412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.908507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.908528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.908703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.908730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.908829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.908851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.909001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.909023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 
00:36:52.964 [2024-12-14 03:18:07.909112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.909134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.909286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.909307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.909412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.909434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.909531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.909552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.909636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.909660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.909751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.909773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.909868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.909889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.909975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.909996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.910080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.910101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.910190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.910213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 
00:36:52.964 [2024-12-14 03:18:07.910378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.910402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.910572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.910594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.910683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.910705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.910798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.910819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.910909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.910930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.911077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.911099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.964 [2024-12-14 03:18:07.911192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.964 [2024-12-14 03:18:07.911213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.964 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.911377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.911400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.911564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.911586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.911665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.911686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 
00:36:52.965 [2024-12-14 03:18:07.911833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.911855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.911953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.911974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.912069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.912091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.912177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.912198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.912284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.912306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.912470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.912492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.912642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.912664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.912752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.912774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.912858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.912880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.912961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.912982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 
00:36:52.965 [2024-12-14 03:18:07.913074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.913095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.913175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.913197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.913282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.913303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.913406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.913427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.913508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.913529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.913716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.913738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.913833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.913854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.914001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.914022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.914108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.914130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.914296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.914327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 
00:36:52.965 [2024-12-14 03:18:07.914430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.914451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.914605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.914627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.914711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.914732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.914958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.914979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.915060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.915081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.915190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.915211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.915299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.915329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.915415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.915436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.915517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.915538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.915627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.915648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 
00:36:52.965 [2024-12-14 03:18:07.915727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.965 [2024-12-14 03:18:07.915747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.965 qpair failed and we were unable to recover it. 00:36:52.965 [2024-12-14 03:18:07.915835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.915856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.915946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.915967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.916120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.916141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.916220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.916241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.916334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.916357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.916441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.916462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.916553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.916574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.916740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.916761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.916910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.916931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 
00:36:52.966 [2024-12-14 03:18:07.917092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.917114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.917196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.917218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.917406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.917430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.917539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.917561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.917661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.917682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.917763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.917788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.917880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.917902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.918055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.918077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.918156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.918178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.918278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.918300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 
00:36:52.966 [2024-12-14 03:18:07.918501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.918523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.918741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.918762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.918914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.918934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.919021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.919042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.919136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.919157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.919326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.919348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.919452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.919474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.919557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.919579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.919672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.919693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.919917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.919939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 
00:36:52.966 [2024-12-14 03:18:07.920046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.920067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.920161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.920183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.920348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.920370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.920467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.920490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.920644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.920666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.920747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.920768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.920855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.920876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.920958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.920980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.921074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.921095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 00:36:52.966 [2024-12-14 03:18:07.921176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.966 [2024-12-14 03:18:07.921198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.966 qpair failed and we were unable to recover it. 
00:36:52.966 [2024-12-14 03:18:07.921296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.921326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.921416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.921437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.921520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.921549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.921724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.921744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.921842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.921863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.921949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.921970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.922079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.922100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.922179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.922200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.922295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.922352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.922435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.922457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 
00:36:52.967 [2024-12-14 03:18:07.922545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.922566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.922678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.922699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.922809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.922831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.922919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.922940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.923089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.923111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.923222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.923244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.923411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.923434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.923519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.923540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.923642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.923663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.923879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.923901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 
00:36:52.967 [2024-12-14 03:18:07.923981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.924002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.924155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.924177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.924347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.924369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.924471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.924493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.924675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.924696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.924783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.924805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.924885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.924906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.925016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.925038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.925130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.925151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.925254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.925279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 
00:36:52.967 [2024-12-14 03:18:07.925389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.925412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.925501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.925523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.925673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.925694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.925777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.925798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.925878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.925900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.926055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.926076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.926166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.926186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.926348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.926371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.926522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.967 [2024-12-14 03:18:07.926543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.967 qpair failed and we were unable to recover it. 00:36:52.967 [2024-12-14 03:18:07.926647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.926668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 
00:36:52.968 [2024-12-14 03:18:07.926755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.926777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.926889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.926910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.926996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.927017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.927166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.927235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.927407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.927475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.927666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.927700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.927797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.927820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.927914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.927935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.928087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.928109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.928202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.928222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 
00:36:52.968 [2024-12-14 03:18:07.928311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.928342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.928441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.928462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.928654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.928692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.928838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.928875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.929066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.929098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.929286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.929325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.929444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.929485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.929686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.929716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.929816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.929839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.929928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.929949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 
00:36:52.968 [2024-12-14 03:18:07.930113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.930134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.930214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.930235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.930341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.930364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.930520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.930543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.930631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.930652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.930861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.930882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.931063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.931085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.931177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.931199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.931282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.931304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.931464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.931486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 
00:36:52.968 [2024-12-14 03:18:07.931588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.931622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.931822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.931858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.932004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.932035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.932134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.932167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.932440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.932475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.932656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.932687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.932919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.932954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.933062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.968 [2024-12-14 03:18:07.933094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.968 qpair failed and we were unable to recover it. 00:36:52.968 [2024-12-14 03:18:07.933372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.933405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.933507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.933529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 
00:36:52.969 [2024-12-14 03:18:07.933626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.933648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.933750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.933770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.933859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.933879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.933960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.933985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.934066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.934085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.934200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.934222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.934303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.934334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.934610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.934642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.934774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.934805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.934969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.935001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 
00:36:52.969 [2024-12-14 03:18:07.935160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.935191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.935357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.935379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.935468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.935489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.935573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.935593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.935766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.935788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.935939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.935961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.936045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.936066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.936237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.936272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.936428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.936461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.936575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.936606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 
00:36:52.969 [2024-12-14 03:18:07.936770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.936802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.936918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.936948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.937056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.937087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.937192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.937215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.937389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.937413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.937603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.937625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.937708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.937728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.937875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.937897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.937997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.938017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.938100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.938120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 
00:36:52.969 [2024-12-14 03:18:07.938279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.938301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.969 [2024-12-14 03:18:07.938421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.969 [2024-12-14 03:18:07.938442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.969 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.938684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.938706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.938858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.938880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.938970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.938990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.939088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.939109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.939261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.939282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.939400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.939422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.939574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.939595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.939744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.939765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 
00:36:52.970 [2024-12-14 03:18:07.939850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.939870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.940004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.940026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.940247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.940278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.940482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.940514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.940650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.940684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.940816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.940847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.940949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.940978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.941105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.941137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.941235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.941266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.941402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.941432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 
00:36:52.970 [2024-12-14 03:18:07.941694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.941719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.941822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.941847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.941996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.942018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.942115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.942137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.942336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.942368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.942557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.942589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.942705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.942736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.942869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.942899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.943151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.943186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.943309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.943352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 
00:36:52.970 [2024-12-14 03:18:07.943475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.943505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.943686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.943716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.943846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.943877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.943989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.944021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.944144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.944175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.944369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.944392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.944637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.944668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.944859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.944891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.945092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.945124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.970 [2024-12-14 03:18:07.945230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.945262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 
00:36:52.970 [2024-12-14 03:18:07.945455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.970 [2024-12-14 03:18:07.945488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.970 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.945733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.945754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.945849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.945870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.945948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.945969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.946146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.946167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.946337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.946361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.946461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.946482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.946569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.946590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.946695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.946717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.946895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.946916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 
00:36:52.971 [2024-12-14 03:18:07.947015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.947036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.947141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.947163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.947245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.947265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.947351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.947372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.947545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.947567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.947675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.947697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.947779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.947801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.947949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.947972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.948070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.948092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.948171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.948191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 
00:36:52.971 [2024-12-14 03:18:07.948281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.948303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.948462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.948484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.948710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.948731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.948900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.948922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.949086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.949108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.949207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.949229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.949403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.949436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.949599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.949622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.949793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.949829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.950024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.950054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 
00:36:52.971 [2024-12-14 03:18:07.950227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.950259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.950362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.950386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.950499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.950521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.950613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.950634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.950779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.950800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.951015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.951037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.951205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.951226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.951325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.951347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.951504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.951533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 00:36:52.971 [2024-12-14 03:18:07.951632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.971 [2024-12-14 03:18:07.951656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.971 qpair failed and we were unable to recover it. 
00:36:52.972 [2024-12-14 03:18:07.951750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.951772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.951984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.952006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.952164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.952185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.952343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.952366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.952537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.952559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.952651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.952671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.952862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.952883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.953032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.953066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.953158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.953178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.953259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.953281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 
00:36:52.972 [2024-12-14 03:18:07.953386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.953407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.953556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.953578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.953678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.953699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.953853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.953875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.953971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.953993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.954148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.954173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.954283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.954304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.954565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.954587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.954703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.954725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.954827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.954848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 
00:36:52.972 [2024-12-14 03:18:07.954942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.954963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.955120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.955141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.955288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.955309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.955466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.955488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.955678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.955709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.955972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.956003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.956253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.956284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.956394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.956414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.956641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.956662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.956828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.956850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 
00:36:52.972 [2024-12-14 03:18:07.957027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.957058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.957240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.957271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.957490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.957523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.957700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.957722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.957941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.957971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.958091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.958122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.958235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.958266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.958427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.958450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.958699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.972 [2024-12-14 03:18:07.958721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.972 qpair failed and we were unable to recover it. 00:36:52.972 [2024-12-14 03:18:07.958812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.958833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 
00:36:52.973 [2024-12-14 03:18:07.959008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.959030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.959248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.959269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.959350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.959375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.959476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.959498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.959667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.959688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.959780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.959800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.959900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.959921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.960141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.960163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.960264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.960285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.960471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.960493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 
00:36:52.973 [2024-12-14 03:18:07.960588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.960609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.960689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.960710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.960872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.960893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.961054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.961075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.961222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.961244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.961400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.961421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.961526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.961547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.961707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.961729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.961836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.961857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.961951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.961972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 
00:36:52.973 [2024-12-14 03:18:07.962190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.962212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.962323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.962345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.962508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.962529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.962713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.962745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.962985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.963017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.963190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.963220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.963391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.963425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.963610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.963642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.963829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.963850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.964004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.964029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 
00:36:52.973 [2024-12-14 03:18:07.964125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.964145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.964223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.964248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.964417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.964439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.964538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.964558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.964647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.964667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.964836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.964857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.965008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.965030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.973 qpair failed and we were unable to recover it. 00:36:52.973 [2024-12-14 03:18:07.965189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.973 [2024-12-14 03:18:07.965210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.965448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.965469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.965628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.965649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 
00:36:52.974 [2024-12-14 03:18:07.965821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.965842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.965952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.965973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.966057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.966077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.966180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.966200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.966346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.966368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.966446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.966466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.966566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.966586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.966753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.966774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.966887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.966908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.966996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.967016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 
00:36:52.974 [2024-12-14 03:18:07.967177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.967198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.967301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.967331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.967414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.967434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.967536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.967558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.967670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.967692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.967797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.967817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.967905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.967925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.968013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.968034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.968123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.968143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.968237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.968256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 
00:36:52.974 [2024-12-14 03:18:07.968481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.968504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.968598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.968618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.968800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.968821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.968917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.968937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.969026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.969046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.969207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.969228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.969462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.969485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.969704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.969726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.969813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.969833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 00:36:52.974 [2024-12-14 03:18:07.969930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.974 [2024-12-14 03:18:07.969951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.974 qpair failed and we were unable to recover it. 
00:36:52.974 [2024-12-14 03:18:07.970051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.970071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.970159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.970179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.970272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.970292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.970395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.970417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.970503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.970522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.970685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.970706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.970811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.970831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.970982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.971004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.971111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.971132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.971305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.971334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 
00:36:52.975 [2024-12-14 03:18:07.971509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.971531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.971770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.971801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.972023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.972054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.972180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.972211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.972386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.972421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.972528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.972560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.972741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.972773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.972944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.972965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.973130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.973151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.973246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.973268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 
00:36:52.975 [2024-12-14 03:18:07.973366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.973386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.973492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.973512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.973614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.973635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.973721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.973741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.973928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.973949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.974122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.974153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.974416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.974449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.974569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.974606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.974722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.974754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.974922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.974944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 
00:36:52.975 [2024-12-14 03:18:07.975056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.975077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.975327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.975349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.975444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.975466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.975650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.975682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.975862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.975893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.975999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.976029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.976131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.976160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.976335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.976368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.976553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.975 [2024-12-14 03:18:07.976584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.975 qpair failed and we were unable to recover it. 00:36:52.975 [2024-12-14 03:18:07.976683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.976713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 
00:36:52.976 [2024-12-14 03:18:07.976910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.976931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.977054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.977076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.977168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.977191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.977435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.977467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.977640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.977672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.977842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.977873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.978062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.978084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.978181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.978202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.978352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.978374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.978471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.978492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 
00:36:52.976 [2024-12-14 03:18:07.978656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.978677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.978848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.978880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.979075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.979107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.979279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.979310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.979443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.979468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.979628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.979649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.979883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.979915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.980027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.980058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.980175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.980206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.980383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.980417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 
00:36:52.976 [2024-12-14 03:18:07.980610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.980642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.980749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.980780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.980949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.980970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.981056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.981076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.981165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.981186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.981281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.981302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.981468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.981489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.981639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.981660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.981885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.981907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.982017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.982039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 
00:36:52.976 [2024-12-14 03:18:07.982232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.982254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.982506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.982529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.982639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.982661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.982764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.982785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.982881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.982902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.983086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.983107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.983278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.976 [2024-12-14 03:18:07.983299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.976 qpair failed and we were unable to recover it. 00:36:52.976 [2024-12-14 03:18:07.983534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.983557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.983652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.983673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.983756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.983777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 
00:36:52.977 [2024-12-14 03:18:07.983870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.983891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.984052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.984073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.984228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.984260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.984447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.984481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.984592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.984623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.984816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.984847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.985019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.985040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.985255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.985276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.985401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.985424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.985591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.985613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 
00:36:52.977 [2024-12-14 03:18:07.985838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.985860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.986117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.986138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.986233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.986254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.986470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.986494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.986580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.986601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.986689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.986710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.986812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.986833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.987080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.987101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.987296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.987323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.987435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.987458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 
00:36:52.977 [2024-12-14 03:18:07.987559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.987581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.987750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.987771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.987929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.987951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.988127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.988148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.988318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.988340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.988533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.988555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.988641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.988662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.988775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.988796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.988954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.988975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.989149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.989171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 
00:36:52.977 [2024-12-14 03:18:07.989335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.989358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.989507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.989547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.989674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.989706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.989890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.989922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.990042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.990073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.990244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.990276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.990522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.990555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.990723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.990761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.990860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.977 [2024-12-14 03:18:07.990881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.977 qpair failed and we were unable to recover it. 00:36:52.977 [2024-12-14 03:18:07.990996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.991017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 
00:36:52.978 [2024-12-14 03:18:07.991155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.991176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.991395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.991417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.991593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.991635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.991744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.991774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.991891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.991922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.992123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.992154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.992350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.992382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.992506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.992538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.992724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.992745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.992864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.992885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 
00:36:52.978 [2024-12-14 03:18:07.993049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.993070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.993155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.993176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.993280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.993302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.993464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.993486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.993630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.993652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.993837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.993858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.994032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.994054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.994137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.994160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.994261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.994282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.994379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.994401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 
00:36:52.978 [2024-12-14 03:18:07.994505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.994527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.994610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.994631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.994780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.994801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.994921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.994942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.995031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.995052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.995196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.995217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.995388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.995411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.995567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.995588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.995685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.995706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.995860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.995885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 
00:36:52.978 [2024-12-14 03:18:07.996041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.996063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.996157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.996178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.996374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.996396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.996506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.996530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.996624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.996645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.996743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.978 [2024-12-14 03:18:07.996765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.978 qpair failed and we were unable to recover it. 00:36:52.978 [2024-12-14 03:18:07.996991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.997013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.997105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.997126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.997293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.997321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.997417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.997438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 
00:36:52.979 [2024-12-14 03:18:07.997541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.997563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.997656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.997676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.997780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.997801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.997908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.997929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.998029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.998051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.998138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.998160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.998332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.998354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.998445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.998467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.998619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.998641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.998732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.998753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 
00:36:52.979 [2024-12-14 03:18:07.998835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.998857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.999025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.999046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.999218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.999239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.999341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.999364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.999461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.999482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.999574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.999595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.999707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.999732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:07.999884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:07.999906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:08.000061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:08.000082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:08.000233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:08.000254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 
00:36:52.979 [2024-12-14 03:18:08.000404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:08.000426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:08.000524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:08.000545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:08.000645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:08.000666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:08.000833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.979 [2024-12-14 03:18:08.000855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.979 qpair failed and we were unable to recover it. 00:36:52.979 [2024-12-14 03:18:08.001006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.001027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.001123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.001144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.001323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.001345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.001457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.001479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.001733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.001765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.001893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.001925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 
00:36:52.980 [2024-12-14 03:18:08.002044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.002076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.002268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.002300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.002422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.002454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.002591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.002622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.002745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.002776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.002880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.002900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.003050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.003071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.003156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.003177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.003274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.003295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.003410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.003432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 
00:36:52.980 [2024-12-14 03:18:08.003534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.003555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.003646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.003667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.003754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.003775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.003854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.003876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.003984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.004006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.004171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.004193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.004289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.004311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.004411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.004432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.004518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.004541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.004622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.004643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 
00:36:52.980 [2024-12-14 03:18:08.004723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.004746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.004855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.004877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.004979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.005000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.005152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.005172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.005270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.005291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.005379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.005402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.005487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.005508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.005591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.005615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.005790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.005820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.005937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.005969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 
00:36:52.980 [2024-12-14 03:18:08.006147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.006178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.006292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.006336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.006442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.980 [2024-12-14 03:18:08.006484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.980 qpair failed and we were unable to recover it. 00:36:52.980 [2024-12-14 03:18:08.006568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.006589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.006683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.006704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.006821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.006844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.006944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.006966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.007131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.007152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.007254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.007275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.007391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.007415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 
00:36:52.981 [2024-12-14 03:18:08.007507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.007528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.007688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.007710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.007804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.007826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.007987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.008008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.008098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.008118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.008210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.008232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.008328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.008350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.008444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.008466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.008669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.008691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.008845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.008866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 
00:36:52.981 [2024-12-14 03:18:08.008974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.008995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.009086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.009107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.009209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.009230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.009321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.009343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.009494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.009519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.009671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.009692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.009851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.009894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.010010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.010041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.010241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.010271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.010402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.010436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 
00:36:52.981 [2024-12-14 03:18:08.010601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.010622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.010775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.010818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.010941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.010973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.011073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.011104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.011206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.011238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.011348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.011380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.011565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.011588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.011686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.011707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.011896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.011919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.981 [2024-12-14 03:18:08.012026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.012047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 
00:36:52.981 [2024-12-14 03:18:08.012130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.981 [2024-12-14 03:18:08.012151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.981 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.012253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.012274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.012429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.012451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.012554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.012575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.012677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.012698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.012787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.012808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.012918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.012939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.013029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.013050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.013148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.013169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.013263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.013284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 
00:36:52.982 [2024-12-14 03:18:08.013394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.013415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.013561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.013604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.013711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.013743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.013849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.013881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.013992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.014023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.014151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.014183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.014286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.014327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.014502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.014533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.014711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.014742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.014848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.014868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 
00:36:52.982 [2024-12-14 03:18:08.015017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.015039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.015139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.015160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.015271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.015292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.015524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.015593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.015738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.015774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.015890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.015922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.016030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.016061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.016165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.016196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.016301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.016352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.016451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.016475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 
00:36:52.982 [2024-12-14 03:18:08.016630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.016652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.016755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.016779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.016875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.016896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.016979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.017001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.017083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.017103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.017192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.017211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.017306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.017335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.017439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.017461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.017579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.017611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 00:36:52.982 [2024-12-14 03:18:08.017707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.982 [2024-12-14 03:18:08.017728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:52.982 qpair failed and we were unable to recover it. 
00:36:52.982 [2024-12-14 03:18:08.017816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.983 [2024-12-14 03:18:08.017836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.983 qpair failed and we were unable to recover it.
00:36:52.984 (the three messages above repeated for 49 further connection attempts between 03:18:08.017916 and 03:18:08.023841, all for tqpair=0x1ca6cd0, addr=10.0.0.2, port=4420)
00:36:52.984 [2024-12-14 03:18:08.023936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.984 [2024-12-14 03:18:08.023957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.984 qpair failed and we were unable to recover it.
00:36:52.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 391343 Killed "${NVMF_APP[@]}" "$@"
00:36:52.984 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:52.984 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:52.984 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:52.984 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:52.984 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:52.985 (the connect()/sock connection error/qpair failed messages, interleaved with the trace above, repeated for 27 further attempts between 03:18:08.024037 and 03:18:08.027291, all for tqpair=0x1ca6cd0, addr=10.0.0.2, port=4420)
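For context, errno = 111 in the flood above is ECONNREFUSED on Linux: after the target application is killed (the "Killed" line from target_disconnect.sh), nothing is listening on 10.0.0.2:4420, so every host-side connect() is refused immediately. The following is a minimal, self-contained POSIX sketch, not SPDK code; the address and port come from the log, everything else is illustrative.

```c
/*
 * Minimal POSIX sketch (not SPDK code): shows why the log above fills with
 * "connect() failed, errno = 111". While no process listens on
 * 10.0.0.2:4420, a reachable peer refuses the TCP handshake right away.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host but no listener this reports errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```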
00:36:52.985 [2024-12-14 03:18:08.027420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.985 [2024-12-14 03:18:08.027465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420
00:36:52.985 qpair failed and we were unable to recover it.
00:36:52.985 (the same three messages repeated 5 times between 03:18:08.027675 and 03:18:08.028281 for tqpair=0x7f3b58000b90, then 4 times between 03:18:08.028386 and 03:18:08.028834 for tqpair=0x1ca6cd0, all addr=10.0.0.2, port=4420)
00:36:52.985 [2024-12-14 03:18:08.029084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.985 [2024-12-14 03:18:08.029105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.985 qpair failed and we were unable to recover it.
00:36:52.985 (the same three messages repeated for 15 further attempts on tqpair=0x1ca6cd0 between 03:18:08.029185 and 03:18:08.031057, once on tqpair=0x7f3b4c000b90 at 03:18:08.031211, twice on tqpair=0x7f3b58000b90 between 03:18:08.031426 and 03:18:08.031676, and once more on tqpair=0x1ca6cd0 at 03:18:08.031785, all addr=10.0.0.2, port=4420)
00:36:52.986 [2024-12-14 03:18:08.031908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.986 [2024-12-14 03:18:08.031929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.986 qpair failed and we were unable to recover it.
00:36:52.986 (the same three messages repeated for 7 further attempts on tqpair=0x1ca6cd0 between 03:18:08.032148 and 03:18:08.033001, 11 attempts on tqpair=0x7f3b58000b90 between 03:18:08.033109 and 03:18:08.034831, one attempt on tqpair=0x7f3b50000b90 at 03:18:08.034975, and 5 further attempts on tqpair=0x1ca6cd0 between 03:18:08.035186 and 03:18:08.035788, all addr=10.0.0.2, port=4420; the test trace below was interleaved with those messages)
00:36:52.986 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=391786
00:36:52.986 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 391786
00:36:52.986 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:52.986 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 391786 ']'
00:36:52.986 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:52.986 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:52.986 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:52.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:52.986 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:52.986 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
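At this point the harness has relaunched nvmf_tgt inside the network namespace and waits for it to come up; the trace shows rpc_addr=/var/tmp/spdk.sock and max_retries=100. The sketch below is only an illustration of that wait pattern in C, not the actual waitforlisten shell helper from autotest_common.sh: it polls the UNIX domain socket until a connect() succeeds or the retry budget runs out. The socket path and retry limit come from the trace, the helper name wait_for_rpc_socket is ours, and the 100 ms polling interval is an assumption.

```c
/*
 * Illustrative wait-for-listener sketch (assumption, not the real
 * waitforlisten helper): keep probing the target's UNIX domain socket
 * until it accepts a connection or max_retries attempts have been made.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_rpc_socket(const char *path, int max_retries)
{
    struct sockaddr_un addr = { 0 };
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;           /* target process is up and listening */
        }
        close(fd);
        usleep(100 * 1000);     /* 100 ms between probes (arbitrary) */
    }
    return -1;                  /* listener never appeared */
}

int main(void)
{
    if (wait_for_rpc_socket("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
        return 1;
    }
    printf("RPC socket is listening\n");
    return 0;
}
```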
00:36:52.986 [2024-12-14 03:18:08.035962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.986 [2024-12-14 03:18:08.035984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420
00:36:52.986 qpair failed and we were unable to recover it.
00:36:53.273 (the three messages above repeated for 69 further connection attempts between 03:18:08.036143 and 03:18:08.046661, all for tqpair=0x1ca6cd0, addr=10.0.0.2, port=4420)
00:36:53.273 [2024-12-14 03:18:08.046748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.273 [2024-12-14 03:18:08.046769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.273 qpair failed and we were unable to recover it. 00:36:53.273 [2024-12-14 03:18:08.046922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.273 [2024-12-14 03:18:08.046945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.273 qpair failed and we were unable to recover it. 00:36:53.273 [2024-12-14 03:18:08.047046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.273 [2024-12-14 03:18:08.047066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.273 qpair failed and we were unable to recover it. 00:36:53.273 [2024-12-14 03:18:08.047152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.273 [2024-12-14 03:18:08.047173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.273 qpair failed and we were unable to recover it. 00:36:53.273 [2024-12-14 03:18:08.047262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.273 [2024-12-14 03:18:08.047283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.273 qpair failed and we were unable to recover it. 00:36:53.273 [2024-12-14 03:18:08.047465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.273 [2024-12-14 03:18:08.047488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.047586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.047609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.047702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.047723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.047874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.047898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.048046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.048066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 
00:36:53.274 [2024-12-14 03:18:08.048158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.048179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.048267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.048288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.048457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.048479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.048626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.048647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.048730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.048752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.048842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.048863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.048974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.048994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.049085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.049106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.049283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.049307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.049475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.049497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 
00:36:53.274 [2024-12-14 03:18:08.049735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.049756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.049857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.049879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.049973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.049994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.050141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.050165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.050259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.050280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.050480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.050504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.050657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.050681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.050779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.050800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.050882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.050903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.050999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.051021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 
00:36:53.274 [2024-12-14 03:18:08.051108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.051128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.051219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.051241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.051443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.051465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.051561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.051582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.051666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.051687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.051833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.051854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.052004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.052025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.052125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.052146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.052310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.052341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.052489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.052511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 
00:36:53.274 [2024-12-14 03:18:08.052608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.052630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.052724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.052745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.052904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.052925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.053008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.053028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.274 qpair failed and we were unable to recover it. 00:36:53.274 [2024-12-14 03:18:08.053246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.274 [2024-12-14 03:18:08.053267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.053369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.053397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.053550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.053571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.053717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.053741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.053848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.053870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.053964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.053986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 
00:36:53.275 [2024-12-14 03:18:08.054077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.054100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.054187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.054208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.054298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.054334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.054491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.054512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.054718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.054740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.054825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.054846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.055010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.055031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.055110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.055131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.055215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.055235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.055393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.055415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 
00:36:53.275 [2024-12-14 03:18:08.055582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.055604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.055688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.055710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.055867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.055888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.056036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.056056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.056221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.056242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.056337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.056360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.056512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.056535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.056646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.056667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.056816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.056837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.056930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.056952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 
00:36:53.275 [2024-12-14 03:18:08.057040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.057061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.057158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.057179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.057280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.057306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.057402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.057422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.057514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.057535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.057666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.057688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.057857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.057878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.058028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.058050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.058146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.058167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.058252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.058274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 
00:36:53.275 [2024-12-14 03:18:08.058362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.058384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.058489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.058510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.058753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.058773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.275 [2024-12-14 03:18:08.058867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.275 [2024-12-14 03:18:08.058887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.275 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.059036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.059057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.059177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.059198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.059284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.059306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.059477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.059498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.059642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.059664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.059762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.059783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 
00:36:53.276 [2024-12-14 03:18:08.059941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.059962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.060220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.060240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.060340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.060362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.060464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.060486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.060587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.060608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.060687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.060709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.060791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.060812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.060896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.060917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.061134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.061156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.061266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.061287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 
00:36:53.276 [2024-12-14 03:18:08.061536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.061606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.061756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.061795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.062057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.062088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.062191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.062222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.062395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.062428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.062677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.062708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.062840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.062865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.062976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.062999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.063126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.063147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.063327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.063349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 
00:36:53.276 [2024-12-14 03:18:08.063429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.063451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.063546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.063567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.063663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.063685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.063865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.063887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.063973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.063994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.064234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.064255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.064416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.276 [2024-12-14 03:18:08.064438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.276 qpair failed and we were unable to recover it. 00:36:53.276 [2024-12-14 03:18:08.064600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.064621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.064738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.064759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.064851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.064872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 
00:36:53.277 [2024-12-14 03:18:08.065030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.065052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.065152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.065173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.065328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.065350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.065514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.065536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.065763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.065784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.065946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.065967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.066079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.066101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.066209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.066230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.066390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.066412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.066500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.066521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 
00:36:53.277 [2024-12-14 03:18:08.066615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.066636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.066726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.066747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.066962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.066984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.067183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.067204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.067355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.067378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.067597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.067619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.067794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.067818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.067978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.067999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.068159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.068180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.068339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.068361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 
00:36:53.277 [2024-12-14 03:18:08.068555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.068594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.068724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.068756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.068956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.068987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.069151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.069173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.069285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.069306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.069420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.069441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.069524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.069546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.069694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.069715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.069955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.069976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.070071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.070092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 
00:36:53.277 [2024-12-14 03:18:08.070236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.070258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.070439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.070460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.070577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.070598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.070696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.070717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.070877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.070899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.071050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.277 [2024-12-14 03:18:08.071071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.277 qpair failed and we were unable to recover it. 00:36:53.277 [2024-12-14 03:18:08.071151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.071172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.071272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.071292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.071494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.071516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.071667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.071688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 
00:36:53.278 [2024-12-14 03:18:08.071783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.071805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.071892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.071913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.072007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.072028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.072184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.072205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.072378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.072400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.072481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.072502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.072681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.072702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.072918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.072944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.073040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.073062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.073172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.073193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 
00:36:53.278 [2024-12-14 03:18:08.073338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.073360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.073460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.073481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.073580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.073601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.073685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.073706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.073933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.073954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.074121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.074142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.074237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.074257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.074446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.074468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.074554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.074576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.074745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.074766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 
00:36:53.278 [2024-12-14 03:18:08.074864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.074886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.075041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.075062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.075210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.075231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.075382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.075404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.075556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.075578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.075731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.075751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.075964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.075985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.076066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.076087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.076182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.076203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.076297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.076326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 
00:36:53.278 [2024-12-14 03:18:08.076425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.076446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.076624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.076645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.076835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.076856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.077093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.077115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.278 [2024-12-14 03:18:08.077297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.278 [2024-12-14 03:18:08.077331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.278 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.077485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.077506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.077615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.077636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.077740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.077761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.077919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.077941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.078105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.078126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 
00:36:53.279 [2024-12-14 03:18:08.078226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.078247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.078425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.078448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.078534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.078555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.078706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.078728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.078807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.078827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.078939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.078960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.079111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.079132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.079229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.079251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.079430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.079452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.079544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.079565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 
00:36:53.279 [2024-12-14 03:18:08.079662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.079683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.079781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.079801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.079892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.079922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.080089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.080111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.080193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.080215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.080371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.080393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.080474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.080495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.080577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.080598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.080709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.080730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.080896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.080917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 
00:36:53.279 [2024-12-14 03:18:08.081077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.081098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.081251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.081276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.081458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.081480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.081655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.081676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.081831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.081852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.081944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.081965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.082117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.082138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.082231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.082252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.082342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.082365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.082444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.082465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 
00:36:53.279 [2024-12-14 03:18:08.082645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.082667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.082845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.082866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.082962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.082983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.083062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.279 [2024-12-14 03:18:08.083083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.279 qpair failed and we were unable to recover it. 00:36:53.279 [2024-12-14 03:18:08.083232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.083253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.083346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.083368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.083606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.083627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.083720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.083741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.083840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.083861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.083959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.083980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 
00:36:53.280 [2024-12-14 03:18:08.084125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.084147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.084226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.084247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.084352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.084374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.084567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.084588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.084817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.084838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.084993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.085014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.085193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.085216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.085365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.085387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.085499] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:53.280 [2024-12-14 03:18:08.085540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.085552] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:53.280 [2024-12-14 03:18:08.085568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 
00:36:53.280 [2024-12-14 03:18:08.085720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.085742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.085902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.085923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.086107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.086127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.086252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.086274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.086519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.086543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.086711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.086733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.086960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.086982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.087139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.087162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.087346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.087369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.087488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.087513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 
00:36:53.280 [2024-12-14 03:18:08.087702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.087727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.087946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.087971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.088069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.088092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.088240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.088265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.088378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.088402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.088587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.088612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.088779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.088805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.088971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.088996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.089092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.089115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.089268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.089292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 
00:36:53.280 [2024-12-14 03:18:08.089412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.089434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.089527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.089551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.089655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.280 [2024-12-14 03:18:08.089679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.280 qpair failed and we were unable to recover it. 00:36:53.280 [2024-12-14 03:18:08.089780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.089803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.089982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.090006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.090109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.090137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.090235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.090257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.090358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.090381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.090544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.090567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.090720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.090745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 
00:36:53.281 [2024-12-14 03:18:08.090847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.090868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.090962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.090983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.091136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.091160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.091272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.091293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.091457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.091480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.091639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.091664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.091770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.091791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.091891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.091914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.092130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.092153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.092276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.092331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 
00:36:53.281 [2024-12-14 03:18:08.092433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.092460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.092552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.092576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.092665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.092688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.092797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.092822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.092917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.092942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.093104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.093130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.093234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.093260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.093419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.093446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.093543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.093567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.093680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.093704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 
00:36:53.281 [2024-12-14 03:18:08.093860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.093885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.093987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.094012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.094105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.094135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.094225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.094248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.094414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.094442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.094527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.094551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.094709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.094736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.094887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.094913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.095096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.095120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 00:36:53.281 [2024-12-14 03:18:08.095211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.095234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.281 qpair failed and we were unable to recover it. 
00:36:53.281 [2024-12-14 03:18:08.095333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.281 [2024-12-14 03:18:08.095356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.095572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.095595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.095700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.095722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.095883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.095909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.096002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.096025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.096115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.096136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.096240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.096261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.096411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.096435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.096656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.096680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.096767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.096789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 
00:36:53.282 [2024-12-14 03:18:08.096903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.096927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.097017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.097038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.097143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.097174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.097259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.097280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.097479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.097504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.097675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.097698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.097858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.097882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.098087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.098110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.098218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.098240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.098338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.098364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 
00:36:53.282 [2024-12-14 03:18:08.098471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.098493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.098598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.098620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.098733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.098755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.098853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.098874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.098975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.098997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.099150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.099174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.099266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.099288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.099378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.099401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.099561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.099584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.099811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.099834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 
00:36:53.282 [2024-12-14 03:18:08.100010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.100033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.100118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.100140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.100295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.100337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.100451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.100474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.100624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.100647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.100735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.100758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.100863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.100885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.100997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.101020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.101171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.101195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.101454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.101479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 
00:36:53.282 [2024-12-14 03:18:08.101644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.101668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.101775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.101798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.101961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.282 [2024-12-14 03:18:08.101984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.282 qpair failed and we were unable to recover it. 00:36:53.282 [2024-12-14 03:18:08.102148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.102171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.102326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.102350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.102441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.102463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.102614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.102638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.102748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.102771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.102863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.102884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.102968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.102990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 
00:36:53.283 [2024-12-14 03:18:08.103086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.103109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.103208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.103232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.103329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.103351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.103512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.103536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.103631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.103653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.103822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.103845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.103956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.103979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.104143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.104166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.104251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.104274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.104381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.104404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 
00:36:53.283 [2024-12-14 03:18:08.104512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.104548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.104669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.104706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.104826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.104858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.104988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.105020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.105204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.105237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.105361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.105395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.105597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.105623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.105783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.105806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.105917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.105948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.106192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.106216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 
00:36:53.283 [2024-12-14 03:18:08.106440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.106464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.106553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.106575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.106661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.106682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.106785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.106807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.106968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.106991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.107176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.107200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.107387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.107411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.107506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.107527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.107696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.107720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.107868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.107891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 
00:36:53.283 [2024-12-14 03:18:08.107982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.108005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.108223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.108247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.108490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.108514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.108638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.108661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.108833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.108856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.283 [2024-12-14 03:18:08.108956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.283 [2024-12-14 03:18:08.108978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.283 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.109068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.109090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.109325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.109361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.109481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.109514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.109701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.109734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 
00:36:53.284 [2024-12-14 03:18:08.109846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.109872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.110038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.110062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.110156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.110178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.110329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.110354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.110539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.110562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.110724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.110747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.110904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.110934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.111083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.111106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.111213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.111238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.111405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.111428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 
00:36:53.284 [2024-12-14 03:18:08.111579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.111602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.111695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.111717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.111811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.111834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.111925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.111949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.112047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.112070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.112175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.112197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.112298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.112328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.112496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.112519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.112670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.112694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.112815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.112838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 
00:36:53.284 [2024-12-14 03:18:08.112939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.112961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.113115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.113138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.113308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.113349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.113437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.113461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.113630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.113657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.113744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.113765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.113924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.113947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.114044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.114070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.114161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.114184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.114270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.114293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 
00:36:53.284 [2024-12-14 03:18:08.114461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.114486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.114642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.114666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.114767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.114789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.114887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.114909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.115165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.115189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.115351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.115376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.115476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.115499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.115683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.115708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.115802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.115825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.284 [2024-12-14 03:18:08.115915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.115936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 
00:36:53.284 [2024-12-14 03:18:08.116021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.284 [2024-12-14 03:18:08.116043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.284 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.116126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.116148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.116293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.116324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.116442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.116466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.116567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.116591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.116745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.116768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.116931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.116955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.117035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.117064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.117224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.117247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.117341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.117363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 
00:36:53.285 [2024-12-14 03:18:08.117529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.117552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.117641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.117666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.117824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.117848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.117999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.118022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.118111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.118134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.118293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.118322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.118424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.118447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.118673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.118696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.118782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.118804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.118954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.118988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 
00:36:53.285 [2024-12-14 03:18:08.119146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.119169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.119328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.119352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.119512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.119535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.119693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.119717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.119799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.119821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.119927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.119950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.120063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.120086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.120176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.120200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.120445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.120470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.120566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.120589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 
00:36:53.285 [2024-12-14 03:18:08.120806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.120829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.121012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.121036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.121131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.121154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.121334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.121358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.121520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.121551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.121655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.121678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.121785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.121807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.121894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.121918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.122025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.122055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.122210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.122234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 
00:36:53.285 [2024-12-14 03:18:08.122342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.122365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.122452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.285 [2024-12-14 03:18:08.122475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.285 qpair failed and we were unable to recover it. 00:36:53.285 [2024-12-14 03:18:08.122569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.122592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.122685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.122707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.122803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.122826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.122912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.122933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.123106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.123128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.123216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.123239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.123392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.123416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.123505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.123529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 
00:36:53.286 [2024-12-14 03:18:08.123702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.123725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.123880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.123903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.124077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.124101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.124341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.124365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.124604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.124627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.124822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.124845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.124933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.124955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.125123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.125146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.125309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.125341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.125498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.125521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 
00:36:53.286 [2024-12-14 03:18:08.125626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.125649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.125814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.125837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.125944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.125967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.126137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.126160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.126269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.126292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.126452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.126475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.126658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.126684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.126797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.126820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.126927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.126949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.127115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.127138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 
00:36:53.286 [2024-12-14 03:18:08.127286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.127309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.127402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.127425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.127536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.127558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.127798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.127821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.127918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.127941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.128025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.128047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.128138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.128160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.128333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.128356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.128445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.128468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.128558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.128581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 
00:36:53.286 [2024-12-14 03:18:08.128669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.128692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.128779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.128802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.128912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.128935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.129015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.129037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.129213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.129235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.129338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.129362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.129517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.286 [2024-12-14 03:18:08.129540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.286 qpair failed and we were unable to recover it. 00:36:53.286 [2024-12-14 03:18:08.129694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.129723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.129875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.129898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.130142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.130165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 
00:36:53.287 [2024-12-14 03:18:08.130256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.130278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.130376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.130399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.130489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.130512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.130673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.130696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.130785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.130808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.130901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.130924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.131023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.131045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.131131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.131153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.131271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.131294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.131469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.131492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 
00:36:53.287 [2024-12-14 03:18:08.131592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.131615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.131773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.131796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.131886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.131909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.132015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.132038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.132136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.132159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.132243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.132266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.132369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.132397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.132481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.132504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.132594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.132617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.132722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.132745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 
00:36:53.287 [2024-12-14 03:18:08.132893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.132916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.132997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.133020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.133116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.133139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.133230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.133254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.133341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.133364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.133438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.133461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.133564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.133587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.133688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.133711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.133787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.133810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.133964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.133987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 
00:36:53.287 [2024-12-14 03:18:08.134095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.134118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.134284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.134307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.134412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.134436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.134706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.134729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.134813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.134836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.134919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.134943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.135028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.135051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.135142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.135164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.135263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.135287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.135382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.135405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 
00:36:53.287 [2024-12-14 03:18:08.135554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.135576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.135685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.135708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.135866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.135890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.136057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.136083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.136232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.136255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.136417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.287 [2024-12-14 03:18:08.136440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.287 qpair failed and we were unable to recover it. 00:36:53.287 [2024-12-14 03:18:08.136541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.136563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.136661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.136683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.136771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.136794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.136885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.136908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 
00:36:53.288 [2024-12-14 03:18:08.136999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.137021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.137175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.137197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.137285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.137308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.137422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.137445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.137530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.137553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.137641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.137664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.137878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.137901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.138073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.138096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.138173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.138197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.138447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.138471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 
00:36:53.288 [2024-12-14 03:18:08.138561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.138583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.138700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.138723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.138898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.138920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.139075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.139098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.139194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.139216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.139323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.139346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.139436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.139459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.139554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.139578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.139818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.139841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.139943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.139965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 
00:36:53.288 [2024-12-14 03:18:08.140056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.140079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.140328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.140352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.140449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.140472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.140576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.140598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.140794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.140817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.140909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.140932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.141015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.141038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.141124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.141145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.141229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.141252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.141361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.141384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 
00:36:53.288 [2024-12-14 03:18:08.141549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.141573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.141723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.141746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.141842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.141865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.141965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.141987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.142139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.142187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.142334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.142370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.142593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.142626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.142731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.142763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.142938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.142970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.143105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.143137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 
00:36:53.288 [2024-12-14 03:18:08.143322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.143347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.288 qpair failed and we were unable to recover it. 00:36:53.288 [2024-12-14 03:18:08.143505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.288 [2024-12-14 03:18:08.143528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.143626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.143649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.143819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.143842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.143924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.143947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.144097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.144119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.144278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.144301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.144395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.144417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.144506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.144529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.144641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.144664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 
00:36:53.289 [2024-12-14 03:18:08.144764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.144787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.144878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.144901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.145012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.145035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.145112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.145134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.145214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.145236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.145336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.145359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.145455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.145477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.145630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.145654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.145805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.145828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.145914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.145938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 
00:36:53.289 [2024-12-14 03:18:08.146135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.146158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.146268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.146302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.146433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.146466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.146577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.146610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.146723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.146748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.146854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.146877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.147094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.147117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.147208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.147230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.147323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.147346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.147498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.147520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 
00:36:53.289 [2024-12-14 03:18:08.147621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.147644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.147723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.147745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.147831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.147854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.147944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.147966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.148058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.148081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.148252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.148276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.148373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.148396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.148476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.148498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.148653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.148675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.148766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.148788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 
00:36:53.289 [2024-12-14 03:18:08.148879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.148901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.148979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.149001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.149099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.149122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.149287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.149309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.149408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.149432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.149531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.149553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.149640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.149663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.149758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.149781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.149862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.289 [2024-12-14 03:18:08.149889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.289 qpair failed and we were unable to recover it. 00:36:53.289 [2024-12-14 03:18:08.149991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.150014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 
00:36:53.290 [2024-12-14 03:18:08.150110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.150132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.150235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.150258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.150358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.150382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.150478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.150501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.150587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.150610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.150703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.150726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.150876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.150899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.151119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.151143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.151296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.151326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.151432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.151455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 
00:36:53.290 [2024-12-14 03:18:08.151540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.151563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.151680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.151703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.151869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.151892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.151976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.151999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.152222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.152245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.152341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.152364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.152447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.152469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.152578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.152601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.152685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.152708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.152794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.152818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 
00:36:53.290 [2024-12-14 03:18:08.152903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.152927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.153082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.153104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.153252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.153276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.153379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.153404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.153572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.153595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.153678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.153705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.153926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.153948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.154032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.154055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.154136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.154159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.154255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.154278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 
00:36:53.290 [2024-12-14 03:18:08.154384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.154408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.154496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.154519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.154599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.154621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.154782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.154805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.154906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.154929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.155011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.155033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.155244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.155267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.155495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.155520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.155610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.155632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.155739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.155762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 
00:36:53.290 [2024-12-14 03:18:08.156018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.156041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.156209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.156232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.156340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.156363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.156447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.156470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.156647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.156670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.156762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.156786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.156881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.156904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.290 [2024-12-14 03:18:08.156986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.290 [2024-12-14 03:18:08.157009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.290 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.157111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.157134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.157235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.157258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 
00:36:53.291 [2024-12-14 03:18:08.157357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.157380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.157493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.157516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.157666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.157693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.157797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.157820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.158038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.158061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.158216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.158239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.158346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.158371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.158453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.158476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.158572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.158595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.158763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.158786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 
00:36:53.291 [2024-12-14 03:18:08.158872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.158895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.159045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.159067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.159169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.159193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.159279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.159303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.159393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.159416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.159522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.159545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.159633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.159656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.159808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.159831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.159983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.160006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.160154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.160176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 
00:36:53.291 [2024-12-14 03:18:08.160269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.160292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.160396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.160419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.160523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.160546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.160640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.160662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.160762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.160785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.160933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.160956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.161107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.161130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.161228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.161252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.161336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.161360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.161510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.161533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 
00:36:53.291 [2024-12-14 03:18:08.161619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.161643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.161878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.161901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.162000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.162023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.162172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.162196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.162306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.162355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.162617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.162640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.162786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.162809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.162895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.162918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.163001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.163025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.163181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.163203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 
00:36:53.291 [2024-12-14 03:18:08.163362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.163386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.163481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.163504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.163591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.163614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.163778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.163802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.163991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.164013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.164120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.164143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.164244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.164267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.164353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.164377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.164475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.291 [2024-12-14 03:18:08.164498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.291 qpair failed and we were unable to recover it. 00:36:53.291 [2024-12-14 03:18:08.164582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.164606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 
00:36:53.292 [2024-12-14 03:18:08.164705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.164728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.164895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.164918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.165039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.165062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.165267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.165290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.165384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.165408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.165560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.165583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.165676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.165699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.165858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.165881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.165963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.165986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.166086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.166109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 
00:36:53.292 [2024-12-14 03:18:08.166212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.166235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.166328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.166352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.166433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.166456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.166540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.166563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.166805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.166828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.167004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.167026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.167123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.167146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.167250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.167272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.167462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.167486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.167593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.167616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 
00:36:53.292 [2024-12-14 03:18:08.167716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.167743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.167912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.167937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.168092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.168116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.168211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.168233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.168337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.168361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.168457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.168480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.168565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.168588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.168684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.168708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.168804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.168827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.168970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.168993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 
00:36:53.292 [2024-12-14 03:18:08.169077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.169100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.169199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.169222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.169322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.169346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.169439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.169462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.169548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.169571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.169675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.169697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.169787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.169810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.169959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.169982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.170070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.170093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.170192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.170215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 
00:36:53.292 [2024-12-14 03:18:08.170353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.170377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.170482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.170508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.170594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.170617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.170705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.170728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.170809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.170832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.171067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.171090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.171175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.171198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.292 [2024-12-14 03:18:08.171290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.292 [2024-12-14 03:18:08.171324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.292 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.171412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.171435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.171655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.171678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 
00:36:53.293 [2024-12-14 03:18:08.171758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.171781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.171948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.171971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.172124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.172147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.172228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.172251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.172404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.172429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.172525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.172547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.172658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.172680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.172827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:53.293 [2024-12-14 03:18:08.172830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.172853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.172960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.172982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.173065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.173088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 
00:36:53.293 [2024-12-14 03:18:08.173257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.173280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.173454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.173478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.173598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.173621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.173706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.173729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.173825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.173848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.174066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.174088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.174181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.174204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.174381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.174404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.174504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.174527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.174611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.174634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 
00:36:53.293 [2024-12-14 03:18:08.174793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.174816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.174969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.174992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.175095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.175118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.175209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.175232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.175323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.175350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.175509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.175532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.175754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.175777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.176010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.176033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.176136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.176159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.176327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.176350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 
00:36:53.293 [2024-12-14 03:18:08.176519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.176542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.176625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.176647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.176817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.176840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.176991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.177014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.177186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.177209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.177404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.177429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.177525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.177548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.177766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.177789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.177896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.177919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.178003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.178027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 
00:36:53.293 [2024-12-14 03:18:08.178108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.178132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.178227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.178250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.178538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.178564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.178690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.178713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.178808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.178830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.178937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.178960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.179181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.179205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.179294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.179325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.293 [2024-12-14 03:18:08.179556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.293 [2024-12-14 03:18:08.179579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.293 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.179734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.179757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 
00:36:53.294 [2024-12-14 03:18:08.179922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.179944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.180112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.180141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.180310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.180343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.180434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.180458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.180616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.180640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.180806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.180829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.180916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.180939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.181033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.181056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.181273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.181296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.181486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.181511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 
00:36:53.294 [2024-12-14 03:18:08.181592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.181615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.181715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.181738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.181835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.181859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.182032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.182055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.182149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.182173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.182274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.182298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.182481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.182505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.182693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.182717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.182799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.182823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.182928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.182952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 
00:36:53.294 [2024-12-14 03:18:08.183044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.183067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.183157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.183180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.183422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.183448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.183545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.183569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.183667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.183689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.183782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.183805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.183977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.184000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.184157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.184181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.184334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.184362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.184537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.184560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 
00:36:53.294 [2024-12-14 03:18:08.184714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.184737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.184906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.184929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.185029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.185052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.185149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.185172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.185256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.185279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.185436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.185460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.185622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.185644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.185822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.185845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.185943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.185966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.186066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.186088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 
00:36:53.294 [2024-12-14 03:18:08.186265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.186289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.186494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.186543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.186673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.186707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.186817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.186849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.187054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.187081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.187182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.187204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.187372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.187402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.187535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.187558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.187663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.187686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.294 [2024-12-14 03:18:08.187770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.187794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 
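From this point the refused connections are reported against more than one queue pair (tqpair 0x1ca6cd0 and 0x7f3b58000b90 above, and 0x7f3b4c000b90 further down), and every attempt still ends with "qpair failed and we were unable to recover it." The sketch below is a generic bounded-retry loop with a simple backoff, offered only as an illustration of capping such reconnect attempts; it is not SPDK's qpair recovery logic, and the connect callback, stub, and retry count are invented for the example.

```c
/* Generic bounded-retry sketch, assuming a connect-style callback.
 * NOT SPDK's qpair recovery logic; it only illustrates capping
 * reconnect attempts with a simple backoff instead of retrying
 * indefinitely as the log above shows. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

typedef int (*connect_fn)(void *ctx);   /* hypothetical: 0 on success, -errno on failure */

static bool connect_with_retry(connect_fn fn, void *ctx, int max_attempts)
{
    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        int rc = fn(ctx);
        if (rc == 0) {
            return true;
        }
        fprintf(stderr, "attempt %d/%d failed: errno=%d\n", attempt, max_attempts, -rc);
        sleep((unsigned int)attempt);   /* back off a little: 1s, 2s, 3s, ... */
    }
    return false;   /* caller decides how to surface "unable to recover" */
}

static int always_refused(void *ctx)
{
    (void)ctx;
    return -ECONNREFUSED;   /* mimic errno 111 from the log */
}

int main(void)
{
    if (!connect_with_retry(always_refused, NULL, 3)) {
        puts("connect gave up after 3 attempts");
    }
    return 0;
}
```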
00:36:53.294 [2024-12-14 03:18:08.187894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.294 [2024-12-14 03:18:08.187917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.294 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.188066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.188089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.188182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.188205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.188385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.188409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.188499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.188522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.188616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.188639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.188819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.188842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.188952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.188975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.189058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.189080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.189185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.189207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 
00:36:53.295 [2024-12-14 03:18:08.189309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.189340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.189489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.189512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.189612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.189636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.189790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.189813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.189910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.189933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.190033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.190056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.190143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.190166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.190332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.190355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.190505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.190528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.190660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.190707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 
00:36:53.295 [2024-12-14 03:18:08.190829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.190862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.191039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.191072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.191275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.191300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.191472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.191496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.191659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.191681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.191773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.191797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.191980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.192003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.192106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.192129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.192223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.192246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.192334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.192357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 
00:36:53.295 [2024-12-14 03:18:08.192514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.192537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.192638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.192662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.192858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.192882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.192984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.193008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.193109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.193132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.193217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.193240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.193334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.193358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.193454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.193477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.193630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.193653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.193755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.193778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 
00:36:53.295 [2024-12-14 03:18:08.193885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.193908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.193997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.194021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.194112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.194135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.194290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.194328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.194492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.194516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.194601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.194624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.194749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.194790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.295 [2024-12-14 03:18:08.195041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.295 [2024-12-14 03:18:08.195075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.295 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.195251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.195285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.195417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.195445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 
00:36:53.296 [2024-12-14 03:18:08.195549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.195572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.195659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.195682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.195696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:53.296 [2024-12-14 03:18:08.195720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:53.296 [2024-12-14 03:18:08.195727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:53.296 [2024-12-14 03:18:08.195734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:53.296 [2024-12-14 03:18:08.195739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:53.296 [2024-12-14 03:18:08.195800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.195823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.195979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.196001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.196223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.196246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.196401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.196425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.196579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.196602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.196690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.196713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 
00:36:53.296 [2024-12-14 03:18:08.196989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.197012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.197176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.197201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 [2024-12-14 03:18:08.197108] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.197215] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:36:53.296 [2024-12-14 03:18:08.197299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.197335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.197337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:36:53.296 [2024-12-14 03:18:08.197338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:36:53.296 [2024-12-14 03:18:08.197555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.197577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.197666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.197687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.197911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.197935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.198038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.198061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.198159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.198182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.198352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.198376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 
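Interleaved with the connection errors, the application reports its event framework coming up: "Total cores available: 4" earlier and one "Reactor started on core N" notice for cores 4 through 7 here, i.e. one polling thread per core of its core mask. The sketch below mirrors that one-thread-per-core layout with plain pthreads and CPU affinity on Linux; it is not SPDK's reactor implementation, and the core IDs are simply copied from the notices for illustration.

```c
/* Rough one-thread-per-core illustration, assuming Linux + glibc
 * (pthread_setaffinity_np). NOT SPDK's reactor code; it only mirrors
 * the "Reactor started on core N" notices above. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NUM_WORKERS 4                                  /* "Total cores available: 4" */
static const int cores[NUM_WORKERS] = {4, 5, 6, 7};    /* core IDs from the notices */

static void *worker(void *arg)
{
    int core = *(const int *)arg;
    printf("worker started on core %d\n", core);
    /* a real reactor would poll its registered pollers here */
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_WORKERS];

    for (int i = 0; i < NUM_WORKERS; i++) {
        pthread_create(&threads[i], NULL, worker, (void *)&cores[i]);

        /* pin the i-th worker to its core, like one reactor per core */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cores[i], &set);
        pthread_setaffinity_np(threads[i], sizeof(set), &set);
    }

    for (int i = 0; i < NUM_WORKERS; i++) {
        pthread_join(threads[i], NULL);
    }
    return 0;
}
```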
00:36:53.296 [2024-12-14 03:18:08.198471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.198494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.198595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.198619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.198715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.198739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.198830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.198853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.199017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.199040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.199265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.199288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.199391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.199415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.199529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.199552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.199645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.199667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.199760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.199783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 
00:36:53.296 [2024-12-14 03:18:08.200007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.200030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.200181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.200205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.200360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.200384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.200547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.200570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.200765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.200788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.200945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.200968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.201050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.201074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.201176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.201199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.201293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.201331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.201484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.201508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 
00:36:53.296 [2024-12-14 03:18:08.201614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.201638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.201743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.201766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.201921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.201945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.202134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.202157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.202260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.202284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.202394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.202419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.202503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.202526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.202771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.202795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.202895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.202919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 00:36:53.296 [2024-12-14 03:18:08.203083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.203106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.296 qpair failed and we were unable to recover it. 
00:36:53.296 [2024-12-14 03:18:08.203344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.296 [2024-12-14 03:18:08.203382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.203508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.203540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.203666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.203698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.203824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.203850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.204078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.204102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.204205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.204229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.204404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.204429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.204684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.204708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.204813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.204835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.205004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.205027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 
00:36:53.297 [2024-12-14 03:18:08.205179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.205202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.205297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.205335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.205440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.205464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.205663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.205686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.205776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.205800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.205902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.205925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.206041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.206064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.206147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.206170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.206410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.206434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.206535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.206559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 
00:36:53.297 [2024-12-14 03:18:08.206805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.206828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.206942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.206973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.207100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.207124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.207310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.207342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.207442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.207466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.207563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.207586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.207684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.207706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.207858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.207886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.208107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.208131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.208220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.208243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 
00:36:53.297 [2024-12-14 03:18:08.208473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.208498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.208611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.208635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.208729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.208752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.208990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.209015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.209175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.209200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.209320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.209344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.209439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.209462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.209556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.209580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.209797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.209820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.209918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.209941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 
00:36:53.297 [2024-12-14 03:18:08.210095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.210119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.210234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.210257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.210502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.210527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.210788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.210812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.210982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.211005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.211199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.211222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.211325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.211348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.211502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.211524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.211614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.211637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.211858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.211881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 
00:36:53.297 [2024-12-14 03:18:08.211979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.212002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.212171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.212195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.212289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.212321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.212439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.212462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.212622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.212649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.212732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.212754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.297 qpair failed and we were unable to recover it. 00:36:53.297 [2024-12-14 03:18:08.212926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.297 [2024-12-14 03:18:08.212950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.213115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.213139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.213225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.213247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.213348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.213373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 
00:36:53.298 [2024-12-14 03:18:08.213540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.213564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.213664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.213687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.213780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.213804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.213975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.213998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.214095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.214118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.214219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.214244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.214417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.214441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.214608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.214631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.214729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.214755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.214920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.214943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 
00:36:53.298 [2024-12-14 03:18:08.215097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.215121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.215215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.215238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.215411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.215435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.215530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.215553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.215655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.215678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.215785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.215810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.215893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.215914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.216068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.216093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.216172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.216196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.216289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.216321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 
00:36:53.298 [2024-12-14 03:18:08.216412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.216434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.216519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.216551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.216776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.216800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.217030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.217054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.217154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.217178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.217352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.217376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.217480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.217503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.217658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.217681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.217786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.217808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.217972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.217998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 
00:36:53.298 [2024-12-14 03:18:08.218114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.218138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.218294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.218332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.218494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.218518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.218687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.218710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.218812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.218836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.218993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.219018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.219246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.219269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.219432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.219457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.219544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.219567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.219717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.219741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 
00:36:53.298 [2024-12-14 03:18:08.220003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.220027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.220199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.220224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.220390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.220414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.220530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.220554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.220720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.220744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.220859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.220881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.220979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.221003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.221097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.221121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.221356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.221381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.221542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.221566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 
00:36:53.298 [2024-12-14 03:18:08.221736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.221759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.221845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.221870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.222028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.222052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.298 [2024-12-14 03:18:08.222138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.298 [2024-12-14 03:18:08.222162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.298 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.222274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.222297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.222466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.222491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.222592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.222615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.222767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.222792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.222958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.222981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.223138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.223162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 
00:36:53.299 [2024-12-14 03:18:08.223261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.223284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.223397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.223420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.223645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.223699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.223839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.223872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.224051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.224084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.224210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.224236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.224383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.224408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.224557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.224580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.224730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.224753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.224855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.224880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 
00:36:53.299 [2024-12-14 03:18:08.225048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.225071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.225166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.225189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.225430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.225454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.225539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.225562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.225657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.225680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.225778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.225802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.225919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.225943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.226040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.226064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.226231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.226254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.226440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.226465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 
00:36:53.299 [2024-12-14 03:18:08.226629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.226652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.226823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.226846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.226955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.226978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.227217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.227241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.227429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.227453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.227638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.227662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.227767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.227792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.227876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.227897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.228074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.228097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.228249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.228279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 
00:36:53.299 [2024-12-14 03:18:08.228387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.228411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.228583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.228607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.228776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.228801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.228906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.228931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.229043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.229067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.229228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.229255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.229372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.229400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.229507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.229532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.229634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.229661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.229760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.229786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 
00:36:53.299 [2024-12-14 03:18:08.229963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.229989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.230089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.230114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.230287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.230332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.230490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.230515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.230599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.230622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.230742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.230766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.231010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.231033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.231136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.231160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.231378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.231403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 00:36:53.299 [2024-12-14 03:18:08.231620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.299 [2024-12-14 03:18:08.231644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.299 qpair failed and we were unable to recover it. 
00:36:53.300 [2024-12-14 03:18:08.231812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.231836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.231941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.231965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.232066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.232089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.232190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.232214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.232382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.232406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.232595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.232619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.232721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.232749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.232913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.232937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.233051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.233075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.233296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.233329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 
00:36:53.300 [2024-12-14 03:18:08.233474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.233497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.233656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.233679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.233784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.233806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.233902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.233926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.234023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.234046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.234152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.234176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.234267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.234291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.234525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.234586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.234697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.234731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.234853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.234886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 
00:36:53.300 [2024-12-14 03:18:08.235072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.235105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.235309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.235353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.235533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.235566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.235815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.235842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.235993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.236016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.236179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.236202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.236287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.236309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.236409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.236432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.236537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.236560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.236726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.236750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 
00:36:53.300 [2024-12-14 03:18:08.236900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.236923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.237076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.237099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.237323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.237347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.237536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.237563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.237712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.237736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.237894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.237917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.238015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.238038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.238191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.238214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.238333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.238358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.238441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.238465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 
00:36:53.300 [2024-12-14 03:18:08.238573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.238596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.238695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.238719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.238809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.238831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.238925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.238950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.239141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.239163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.239330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.239355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.239553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.239576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.239687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.239709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.239856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.239880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.240030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.240054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 
00:36:53.300 [2024-12-14 03:18:08.240223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.240246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.240406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.240430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.240592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.240615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.240837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.240860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.240959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.240980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.241088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.241112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.241265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.241289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.300 qpair failed and we were unable to recover it. 00:36:53.300 [2024-12-14 03:18:08.241408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.300 [2024-12-14 03:18:08.241432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.241598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.241622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.241732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.241756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 
00:36:53.301 [2024-12-14 03:18:08.241911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.241940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.242166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.242192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.242353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.242379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.242553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.242578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.242685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.242707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.242803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.242825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.243041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.243066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.243170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.243195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.243296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.243328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.243433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.243458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 
00:36:53.301 [2024-12-14 03:18:08.243548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.243569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.243718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.243742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.243829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.243851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.244076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.244099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.244283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.244335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.244579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.244612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.244785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.244817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.244997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.245031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.245214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.245247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.245407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.245444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 
00:36:53.301 [2024-12-14 03:18:08.245563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.245596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.245714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.245746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.246006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.246040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.246217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.246248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.246352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.246375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.246527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.246551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.246632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.246654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.246819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.246849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.247095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.247120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.247221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.247245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 
00:36:53.301 [2024-12-14 03:18:08.247490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.247515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.247766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.247790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.247938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.247962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.248127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.248150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.248231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.248252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.248406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.248431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.248528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.248550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.248648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.248672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.248761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.248783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.248936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.248959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 
00:36:53.301 [2024-12-14 03:18:08.249042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.249064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.249260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.249304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.249499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.249532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.249649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.249682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.249798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.249831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.249945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.249979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.250192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.250226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.250467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.250502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.250614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.250639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.250727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.250750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 
00:36:53.301 [2024-12-14 03:18:08.250839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.250861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.251084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.251110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.251228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.251254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.251368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.251394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.301 qpair failed and we were unable to recover it. 00:36:53.301 [2024-12-14 03:18:08.251490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.301 [2024-12-14 03:18:08.251519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.251702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.251726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.251820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.251842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.251927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.251949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.252029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.252050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.252205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.252229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 
00:36:53.302 [2024-12-14 03:18:08.252337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.252360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.252468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.252492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.252603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.252628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.252725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.252748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.252848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.252871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.253021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.253044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.253128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.253150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.253239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.253261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.253411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.253458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.253703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.253749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 
00:36:53.302 [2024-12-14 03:18:08.253996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.254030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.254202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.254234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.254359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.254394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.254652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.254684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.254794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.254826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.254922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.254954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.255141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.255174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.255379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.255406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.255494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.255515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.255668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.255691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 
00:36:53.302 [2024-12-14 03:18:08.255789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.255812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.256031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.256059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.256176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.256199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.256298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.256331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.256436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.256460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.256600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.256624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.256711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.256733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.256907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.256930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.257098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.257120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.257285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.257308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 
00:36:53.302 [2024-12-14 03:18:08.257467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.257490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.257585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.257609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.257843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.257866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.258036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.258060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.258154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.258176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.258279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.258327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.258449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.258483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.258610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.258641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.258836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.258868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.258979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.259011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 
00:36:53.302 [2024-12-14 03:18:08.259224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.259256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.259395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.259428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.259543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.259575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.259739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.259772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.259953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.259980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.260071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.260093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.260198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.260221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.260327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.260352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.260501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.260528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.260676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.260700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 
00:36:53.302 [2024-12-14 03:18:08.260853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.260876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.260968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.260990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.302 [2024-12-14 03:18:08.261142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.302 [2024-12-14 03:18:08.261165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.302 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.261262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.261285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.261390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.261415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.261513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.261536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.261615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.261637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.261796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.261819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.261920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.261941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.262034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.262057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 
00:36:53.303 [2024-12-14 03:18:08.262151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.262172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.262329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.262353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.262453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.262476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.262569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.262592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.262680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.262702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.262882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.262904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.262989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.263011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.263133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.263157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.263329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.263353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.263514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.263537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 
00:36:53.303 [2024-12-14 03:18:08.263690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.263714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.263955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.263978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.264139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.264162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.264332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.264356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.264459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.264483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.264634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.264657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.264776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.264799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.264886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.264908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.264989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.265012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.265161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.265184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 
00:36:53.303 [2024-12-14 03:18:08.265371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.265395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.265489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.265512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.265609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.265632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.265852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.265875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.265957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.265979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.266073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.266097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.266265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.266289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.266470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.266495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.266583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.266606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.266720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.266756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b58000b90 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 
00:36:53.303 [2024-12-14 03:18:08.266889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.266927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.267039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.267071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.267191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.267216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.267329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.267353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.267435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.267458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.267563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.267587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.267744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.267767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.267875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.267898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.267999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.268024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.268203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.268226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 
00:36:53.303 [2024-12-14 03:18:08.268385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.268409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.268520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.268544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.268703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.268726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.268877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.268900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.268981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.269004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.269110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.269133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.269326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.269350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.269570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.269593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.269776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.269800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.269989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.270013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 
00:36:53.303 [2024-12-14 03:18:08.270113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.270136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.270238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.270261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.270372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.270396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.303 qpair failed and we were unable to recover it. 00:36:53.303 [2024-12-14 03:18:08.270511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.303 [2024-12-14 03:18:08.270535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.270684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.270708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.270862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.270885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.270975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.271002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.271160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.271183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.271277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.271300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.271527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.271550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 
00:36:53.304 [2024-12-14 03:18:08.271631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.271654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.271805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.271829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.272039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.272061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.272308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.272341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.272560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.272583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.272735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.272758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.272853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.272875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.273038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.273061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.273228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.273251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.273406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.273430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 
00:36:53.304 [2024-12-14 03:18:08.273585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.273609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.273803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.273826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.273948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.273970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.274155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.274179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.274282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.274305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.274429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.274475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.274726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.274750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.274858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.274882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.275033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.275056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.275157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.275181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 
00:36:53.304 [2024-12-14 03:18:08.275425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.275449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.275613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.275637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.275734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.275757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.275857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.275884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.275987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.276010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.276204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.276227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.276432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.276456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.276629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.276652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.276808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.276831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.277004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.277028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 
00:36:53.304 [2024-12-14 03:18:08.277138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.277162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.277285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.277308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.277557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.277580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.277670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.277692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.277797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.277820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.277974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.277996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.278166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.278190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.278298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.278331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.278505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.278528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.278714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.278738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 
00:36:53.304 [2024-12-14 03:18:08.278895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.278918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.279028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.279051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.279217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.279241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.279418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.279443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.279597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.279619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.279771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.279793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.279906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.279929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.280024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.280048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.280144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.280166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.280270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.280294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 
00:36:53.304 [2024-12-14 03:18:08.280454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.280481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.280583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.280606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.280777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.304 [2024-12-14 03:18:08.280800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.304 qpair failed and we were unable to recover it. 00:36:53.304 [2024-12-14 03:18:08.280896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.280919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.281032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.281056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.281141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.281164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.281326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.281351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.281504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.281527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.281692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.281715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.281805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.281839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 
00:36:53.305 [2024-12-14 03:18:08.281926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.281948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.282102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.282125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.282234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.282257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.282476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.282500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.282610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.282634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.282784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.282807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.282979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.283002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.283107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.283130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.283224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.283246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.283361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.283385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 
00:36:53.305 [2024-12-14 03:18:08.283475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.283498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.283595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.283618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.283729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.283752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.283968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.283992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.284093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.284117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.284268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.284291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.284395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.284418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.284518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.284542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.284771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.284794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.285029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.285052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 
00:36:53.305 [2024-12-14 03:18:08.285206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.285229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.285327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.285351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.285502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.285524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.285676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.285700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.285807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.285831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:53.305 [2024-12-14 03:18:08.286004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.286028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.286189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.286212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:53.305 [2024-12-14 03:18:08.286376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.286400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.286585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.286609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 
00:36:53.305 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:53.305 [2024-12-14 03:18:08.286695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.286719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.286809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.286833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.286920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:53.305 [2024-12-14 03:18:08.286943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.287058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.287080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:53.305 [2024-12-14 03:18:08.287236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.287260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.287413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.287437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.287533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.287556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.287667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.287691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.287849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.287873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 
00:36:53.305 [2024-12-14 03:18:08.288025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.288049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.288200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.288224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.288310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.288355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.288455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.288479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.288574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.288596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.288823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.288847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.289029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.289052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.289220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.289243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.289485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.289509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.289660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.289682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 
00:36:53.305 [2024-12-14 03:18:08.289787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.289810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.290033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.290059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.290233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.290255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.290345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.290366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.290540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.290563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.290653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.290675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.290769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.305 [2024-12-14 03:18:08.290791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.305 qpair failed and we were unable to recover it. 00:36:53.305 [2024-12-14 03:18:08.290957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.290980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.291071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.291095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.291246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.291271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 
00:36:53.306 [2024-12-14 03:18:08.291380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.291405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.291498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.291521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.291675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.291698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.291810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.291833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.291940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.291964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.292183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.292206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.292291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.292323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.292418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.292441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.292533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.292556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.292646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.292670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 
00:36:53.306 [2024-12-14 03:18:08.292840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.292864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.292952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.292974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.293083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.293105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.293284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.293306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.293425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.293449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.293541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.293563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.293651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.293677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.293831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.293854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.293952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.293975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.294122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.294146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 
00:36:53.306 [2024-12-14 03:18:08.294346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.294371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.294473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.294497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.294596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.294619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.294806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.294830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.295025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.295049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.295218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.295244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.295351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.295374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.295583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.295606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.295698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.295721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.295884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.295907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 
00:36:53.306 [2024-12-14 03:18:08.296054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.296077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.296191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.296214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.296322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.296346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.296431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.296452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.296572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.296596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.296714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.296737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.296819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.296844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.296929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.296950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.297037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.297062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.297168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.297193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 
00:36:53.306 [2024-12-14 03:18:08.297345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.297370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.297458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.297480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.297630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.297654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.297823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.297847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.297932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.297958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.298069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.298094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.298203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.298227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.298344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.298369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.298454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.298478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.298638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.298661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 
00:36:53.306 [2024-12-14 03:18:08.298761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.298786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.298936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.298960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.299122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.299149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.299334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.299359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.299526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.299549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.299631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.299654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.299801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.299825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.300005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.300028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.300135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.300158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.300256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.300279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 
00:36:53.306 [2024-12-14 03:18:08.300395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.300419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.300503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.306 [2024-12-14 03:18:08.300525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.306 qpair failed and we were unable to recover it. 00:36:53.306 [2024-12-14 03:18:08.300620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.300644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.300816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.300840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.300924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.300947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.301101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.301124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.301210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.301233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.301419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.301444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.301527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.301550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.301640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.301664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 
00:36:53.307 [2024-12-14 03:18:08.301758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.301781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.301865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.301890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.302044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.302067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.302166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.302189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.302289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.302322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.302495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.302518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.302614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.302637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.302724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.302747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.302839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.302862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.302961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.302984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 
00:36:53.307 [2024-12-14 03:18:08.303102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.303126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.303240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.303264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.303486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.303511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.303611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.303634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.303762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.303785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.303883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.303906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.304060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.304083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.304178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.304201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.304285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.304308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.304492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.304516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 
00:36:53.307 [2024-12-14 03:18:08.304687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.304710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.304796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.304819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.304905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.304928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.305109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.305157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.305298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.305346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.305472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.305504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.305615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.305648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.305824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.305855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.305965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.305997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.306118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.306145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 
00:36:53.307 [2024-12-14 03:18:08.306323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.306348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.306446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.306469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.306561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.306584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.306684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.306708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.306862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.306885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.306970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.306993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.307098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.307121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.307201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.307225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.307328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.307353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.307437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.307460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 
00:36:53.307 [2024-12-14 03:18:08.307564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.307587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.307700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.307723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.307818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.307842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.307926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.307954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.308035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.308058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.308157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.308180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.308281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.308304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.308397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.308420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.308510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.308536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.308703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.308725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 
00:36:53.307 [2024-12-14 03:18:08.308829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.307 [2024-12-14 03:18:08.308863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.307 qpair failed and we were unable to recover it. 00:36:53.307 [2024-12-14 03:18:08.308972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.309007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.309186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.309217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.309324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.309358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.309541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.309574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.309690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.309721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.309824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.309856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.309988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.310019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.310124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.310157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.310265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.310290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 
00:36:53.308 [2024-12-14 03:18:08.310415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.310456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.310587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.310619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.310755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.310786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.310903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.310943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.311057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.311090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.311209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.311241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.311350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.311376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.311542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.311566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.311651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.311673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.311766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.311788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 
00:36:53.308 [2024-12-14 03:18:08.311877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.311901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.311995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.312017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.312100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.312123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.312207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.312233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.312331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.312356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.312524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.312547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.312645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.312668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.312775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.312800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.312886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.312910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.312996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.313019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 
00:36:53.308 [2024-12-14 03:18:08.313101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.313124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.313215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.313237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.313349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.313373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.313470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.313494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.313645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.313670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.313758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.313781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.313870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.308 [2024-12-14 03:18:08.313894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.308 qpair failed and we were unable to recover it. 00:36:53.308 [2024-12-14 03:18:08.313983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.314007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.314090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.314113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.314199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.314222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 
00:36:53.309 [2024-12-14 03:18:08.314320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.314344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.314435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.314457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.314536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.314559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.314713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.314741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.314830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.314853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.314938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.314960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.315056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.315080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.315168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.315191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.315348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.315372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.315470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.315492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 
00:36:53.309 [2024-12-14 03:18:08.315578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.315601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.315677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.315700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.315795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.315818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.315914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.315937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.316035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.316070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.316180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.316212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.316326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.316360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.316487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.316519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.316690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.316722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.316838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.316869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 
00:36:53.309 [2024-12-14 03:18:08.317034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.317060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.317222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.317245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.317331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.317355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.317451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.317474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.317594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.317617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.317713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.317738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.317824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.317847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.317941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.317965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.318128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.318153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.318235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.318258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 
00:36:53.309 [2024-12-14 03:18:08.318355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.318378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.318468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.318491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.318585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.318607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.318699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.318722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.318835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.318860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.318948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.318973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.319059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.319081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.319168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.319192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.319281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.319304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.319400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.319424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 
00:36:53.309 [2024-12-14 03:18:08.319513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.319536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.319639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.319675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.309 [2024-12-14 03:18:08.319784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.309 [2024-12-14 03:18:08.319815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.309 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.319927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.319960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.320056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.320081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.320173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.320197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.320278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.320301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.320419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.320443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.320541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.320564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.320647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.320670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 
00:36:53.310 [2024-12-14 03:18:08.320761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.320784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.320868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.320891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.320980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.321003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.321090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.321113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.321208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.321231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.321337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.321361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.321449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.321472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.321551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.321573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.321653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.321677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.321825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.321849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 
00:36:53.310 [2024-12-14 03:18:08.321932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.321955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.322044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.322067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.322149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.322172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.322253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.322275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.322395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.322420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:53.310 [2024-12-14 03:18:08.322514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.322539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.322638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.322661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.322744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.322767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.322869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:53.310 [2024-12-14 03:18:08.322894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 
00:36:53.310 [2024-12-14 03:18:08.322984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.323007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.323158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.323182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.310 [2024-12-14 03:18:08.323258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.323282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.323372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.323396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:53.310 [2024-12-14 03:18:08.323559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.323584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.323667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.323689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.323784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.323808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.323929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.323952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.324052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.324074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 
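Interleaved with the connect() retries, the xtrace lines above show the test script installing its cleanup trap from nvmf/common.sh and, from host/target_disconnect.sh, creating a 64 MB Malloc0 bdev over RPC. A rough sketch of how such a target-side setup typically reads in SPDK test scripts follows; only the trap command and the bdev_malloc_create arguments are taken from this log, while the remaining rpc_cmd calls, their flags, and the subsystem NQN are illustrative assumptions rather than the exact commands this test runs.

    # Cleanup trap and bdev creation, as traced above.
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0    # 64 MB malloc bdev, 512-byte blocks

    # Assumed follow-up steps: export Malloc0 over NVMe/TCP on the listener the
    # initiator keeps retrying (10.0.0.2:4420 in the errors above).
    rpc_cmd nvmf_create_transport -t tcp
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420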
00:36:53.310 [2024-12-14 03:18:08.324157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.324181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.324279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.324302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.324529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.324552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.324636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.324658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.324756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.324778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.324865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.324888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.324969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.324991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.325079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.310 [2024-12-14 03:18:08.325102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.310 qpair failed and we were unable to recover it. 00:36:53.310 [2024-12-14 03:18:08.325183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.325206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.325300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.325332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 
00:36:53.311 [2024-12-14 03:18:08.325434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.325457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.325543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.325567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.325650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.325672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.325756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.325779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.325934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.325957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.326041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.326067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.326168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.326191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.326277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.326300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.326396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.326419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.326505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.326527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 
00:36:53.311 [2024-12-14 03:18:08.326613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.326636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.326715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.326738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.326825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.326848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.327000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.327022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.327108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.327131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.327282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.327305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.327401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.327424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.327522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.327544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.327647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.327669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.327751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.327774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 
00:36:53.311 [2024-12-14 03:18:08.327857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.327881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.328031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.328054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.328154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.328176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.328357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.328381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.328479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.328501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.328581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.328604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.328698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.328721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.328875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.328898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.329049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.329071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.329158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.329181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 
00:36:53.311 [2024-12-14 03:18:08.329272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.329295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.329388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.329412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.329494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.329520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.329618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.329641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.329727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.329750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.329833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.329857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.329958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.329981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.330079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.330102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.311 qpair failed and we were unable to recover it. 00:36:53.311 [2024-12-14 03:18:08.330188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.311 [2024-12-14 03:18:08.330210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.330293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.330322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 
00:36:53.312 [2024-12-14 03:18:08.330398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.330422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.330581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.330604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.330715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.330737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.330815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.330838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.330922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.330945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.331040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.331062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.331183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.331206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.331367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.331391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.331544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.331566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.331664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.331688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 
00:36:53.312 [2024-12-14 03:18:08.331843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.331866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.331949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.331973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.332067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.332089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.332183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.332206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.332366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.332390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.332534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.332557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.332659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.332682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.332797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.332820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.332924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.332947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.333051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.333074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 
00:36:53.312 [2024-12-14 03:18:08.333159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.333183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.333263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.333286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.333390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.333414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.333518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.333542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.333634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.333657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.333742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.333765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.333864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.333886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.334039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.334062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.334171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.334194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.334279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.334302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 
00:36:53.312 [2024-12-14 03:18:08.334396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.334419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.334519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.334542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.334738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.334761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.335000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.335035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.335144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.335177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.335353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.335386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.335497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.335530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.335669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.335700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.335815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.335846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.335956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.335981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 
00:36:53.312 [2024-12-14 03:18:08.336137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.336160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.336309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.336342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.336425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.312 [2024-12-14 03:18:08.336449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.312 qpair failed and we were unable to recover it. 00:36:53.312 [2024-12-14 03:18:08.336556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.336579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.336750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.336773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.336939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.336962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.337052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.337075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.337179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.337202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.337361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.337386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.337484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.337507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 
00:36:53.313 [2024-12-14 03:18:08.337664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.337687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.337834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.337857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.337939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.337962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.338113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.338136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.338249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.338272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.338379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.338402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.338566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.338588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.338678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.338701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.338799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.338821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.338975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.338998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 
00:36:53.313 [2024-12-14 03:18:08.339091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.339118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.339217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.339239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.339326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.339349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.339437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.339460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.339540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.339562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.339726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.339748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.339911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.339934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.340081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.340104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.340188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.340211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.340320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.340343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 
00:36:53.313 [2024-12-14 03:18:08.340454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.340477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.340560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.340583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.340688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.340711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.340869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.340892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.340975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.340998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.341154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.341177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.341278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.341301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.341460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.341483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.341580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.341603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.341693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.341716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 
00:36:53.313 [2024-12-14 03:18:08.341962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.341985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.342069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.342092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.342243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.342266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.342379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.342402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.342569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.342592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.342692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.342715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.342810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.342833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.343005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.343032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.343122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-14 03:18:08.343145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-14 03:18:08.343242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.343265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 
00:36:53.314 [2024-12-14 03:18:08.343508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.343532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.343713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.343736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.343830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.343853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.343951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.343974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.344068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.344090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.344282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.344305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.344473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.344497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.344621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.344657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.344787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.344820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.344948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.344980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b50000b90 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 
00:36:53.314 [2024-12-14 03:18:08.345086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.345110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.345271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.345294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.345500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.345523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.345692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.345716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.345867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.345890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.345979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.346003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.346097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.346120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.346339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.346363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.346476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.346499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.346745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.346768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 
00:36:53.314 [2024-12-14 03:18:08.347004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.347028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.347123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.347146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.347322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.347346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.347602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.347625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.347776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.347804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.347891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.347914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.348129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.348152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.348247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.348270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.348524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.348548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.348699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.348722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 
00:36:53.314 [2024-12-14 03:18:08.348823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.348846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.348945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.348968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.349215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.349238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.349349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.349373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.349468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.349492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.349653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.349676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.349763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.349786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.349948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.349972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.350144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.350168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.350329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.350353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 
00:36:53.314 [2024-12-14 03:18:08.350507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.350531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.350622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.350646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.350811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.350834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.351000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.351024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-14 03:18:08.351130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-14 03:18:08.351153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.351235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.351259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.351411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.351435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.351523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.351546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.351768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.351792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.351949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.351973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 
00:36:53.315 [2024-12-14 03:18:08.352141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.352164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.352255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.352278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.352560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.352585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.352688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.352711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.352808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.352831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.353021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.353045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.353215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.353238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.353354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.353379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.353542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.353566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.353723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.353747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 
00:36:53.315 [2024-12-14 03:18:08.353845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.353868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.353951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.353974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.354126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.354149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.354296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.354328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.354481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.354504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.354732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.354756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.354910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.354933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.355101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.355124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.355308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.355341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.355444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.355467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 
00:36:53.315 [2024-12-14 03:18:08.355565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.355588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.355773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.355796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.355954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.355978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.356073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.356096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.356175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.356198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.356302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.356335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.356550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.356573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.356678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.356702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.356801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.356824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.357051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.357075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 
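Every retry in the block above fails the same way: errno 111 is ECONNREFUSED, i.e. nothing was accepting on 10.0.0.2:4420 at the moment posix_sock_create issued connect(), which is exactly the condition this target-disconnect test provokes. A minimal probe that hits the same failure mode from a shell is sketched below; it is illustrative only, not part of the test suite, and assumes coreutils timeout and bash /dev/tcp support on the initiator host:

    # errno 111 == ECONNREFUSED: connect() is rejected because no listener
    # is up on the target address/port while the host keeps retrying.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
        echo "connect to 10.0.0.2:4420 refused or timed out, matching the log above"
    fi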
00:36:53.315 Malloc0 00:36:53.315 [2024-12-14 03:18:08.357308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.357355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.357523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.357546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.357760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.357783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.315 [2024-12-14 03:18:08.357947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.357970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.358064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.358086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.358180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.358204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:53.315 [2024-12-14 03:18:08.358357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.358382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.315 qpair failed and we were unable to recover it. 00:36:53.315 [2024-12-14 03:18:08.358484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.315 [2024-12-14 03:18:08.358507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.316 [2024-12-14 03:18:08.358674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.358697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 
00:36:53.316 [2024-12-14 03:18:08.358853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.358876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.359039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.359065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.359165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.359189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.359366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.359390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.359606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.359628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.359742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.359765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.359929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.359952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.360119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.360141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.360299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.360331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.360445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.360469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 
00:36:53.316 [2024-12-14 03:18:08.360629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.360652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.360754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.360777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.360859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.360882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.361046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.361069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.361231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.361254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.361354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.361379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.361482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.361505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.361684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.361707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.361792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.361814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.361965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.361989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 
00:36:53.316 [2024-12-14 03:18:08.362144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.362167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.362264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.362288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.362412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.362435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.362522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.362545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.362694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.362717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.362815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.362838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.363009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.363032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.363120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.363143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.363324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.363348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.363455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.363478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 
00:36:53.316 [2024-12-14 03:18:08.363580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.363603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.363808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.363832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.363992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.364016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.364113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.364136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.364290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.364323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.364418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.364441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.364527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.364550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.364609] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:53.316 [2024-12-14 03:18:08.364707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.364731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.364842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.364864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.365028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.365051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 
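The "*** TCP Transport Init ***" notice above is the target acknowledging the nvmf_create_transport RPC traced a few entries earlier. A hedged sketch of the same step issued directly with SPDK's rpc.py against a running nvmf_tgt (default RPC socket assumed; the test's extra -o switch is left out here rather than guessed at):

  # Create the NVMe-oF TCP transport on the target.
  scripts/rpc.py nvmf_create_transport -t TCP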
00:36:53.316 [2024-12-14 03:18:08.365231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.365254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.365353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.365376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.365465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.365489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.365656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.365679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.365769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.365792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.365971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.365995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.316 [2024-12-14 03:18:08.366079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.316 [2024-12-14 03:18:08.366101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.316 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.366282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.366305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.366410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.366433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.366581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.366604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 
00:36:53.317 [2024-12-14 03:18:08.366822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.366846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.366945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.366967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.367053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.367077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.367240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.367264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.367357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.367380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.367542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.367569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.367674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.367697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.367864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.367887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.367986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.368009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.368159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.368184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 
00:36:53.317 [2024-12-14 03:18:08.368346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.368370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.368538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.368561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.368821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.368843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.369024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.369048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.369146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.369169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.369256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.369279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.369383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.369407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.369561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.369583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.369732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.369754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.369857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.369881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 
00:36:53.317 [2024-12-14 03:18:08.370047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.370070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.317 [2024-12-14 03:18:08.370249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.370272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.370365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.370388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.370542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.370565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:53.317 [2024-12-14 03:18:08.370662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.370685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.370961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.317 [2024-12-14 03:18:08.370984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.371156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.371180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:53.317 [2024-12-14 03:18:08.371334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.371358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 
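Interleaved with the connection errors, the harness runs nvmf_create_subsystem for nqn.2016-06.io.spdk:cnode1 with -a (allow any host) and serial number SPDK00000000000001. The same step as a standalone rpc.py call, assuming the default RPC socket:

  # Create the subsystem that the disconnect test will connect to.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001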
00:36:53.317 [2024-12-14 03:18:08.371444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.371467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.371615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.371638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.371739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.371762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.371943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.371967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.372133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.372156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.372243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.372265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.372359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.372383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.372497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.372521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.372673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.372696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.372854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.372878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 
00:36:53.317 [2024-12-14 03:18:08.372963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.372986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.373074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.373097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.373198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.373222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.373338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.373362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.373535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.373559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.317 [2024-12-14 03:18:08.373663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.317 [2024-12-14 03:18:08.373686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.317 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.373814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.373853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.374038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.374071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.374194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.374226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.374386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.374420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 
00:36:53.318 [2024-12-14 03:18:08.374614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.374646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.375261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.375310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.375521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.375547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.375821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.375844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.376008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.376031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.376275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.376298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.376474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.376497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.376607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.376631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.376797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.376820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.376909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.376932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 
00:36:53.318 [2024-12-14 03:18:08.377149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.377172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.377286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.377309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.377439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.377463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.377625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.377648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.377811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.377835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.378038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.378061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.378161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.378183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.318 [2024-12-14 03:18:08.378297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.378329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.378545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.378568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 
00:36:53.318 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:53.318 [2024-12-14 03:18:08.378672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.378694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.378910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.378932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.318 [2024-12-14 03:18:08.379084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.379107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:53.318 [2024-12-14 03:18:08.379278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.379302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.379471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.379494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.379618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.379642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.379773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.379796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.379960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.379983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.380077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.380099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 
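The nvmf_subsystem_add_ns call traced above attaches the Malloc0 bdev (whose creation printed "Malloc0" earlier in this output) to cnode1 as a namespace. A sketch of both steps via rpc.py; the 64 MiB size and 512-byte block size are illustrative assumptions, not values taken from the test:

  # Back the subsystem with a RAM-disk bdev, then expose it as a namespace.
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0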
00:36:53.318 [2024-12-14 03:18:08.380209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.380231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.380353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.380376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.380493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.380515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.380624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.380645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.380740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.380763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.380936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.380959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.381072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.381095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.381193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.381218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.381328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.318 [2024-12-14 03:18:08.381352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.318 qpair failed and we were unable to recover it. 00:36:53.318 [2024-12-14 03:18:08.381473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.319 [2024-12-14 03:18:08.381496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.319 qpair failed and we were unable to recover it. 
00:36:53.319 [2024-12-14 03:18:08.381607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.319 [2024-12-14 03:18:08.381630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.319 qpair failed and we were unable to recover it. 00:36:53.319 [2024-12-14 03:18:08.381725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.319 [2024-12-14 03:18:08.381747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.319 qpair failed and we were unable to recover it. 00:36:53.319 [2024-12-14 03:18:08.381929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.319 [2024-12-14 03:18:08.381951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.319 qpair failed and we were unable to recover it. 00:36:53.319 [2024-12-14 03:18:08.382050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.319 [2024-12-14 03:18:08.382074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.319 qpair failed and we were unable to recover it. 00:36:53.319 [2024-12-14 03:18:08.382180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.319 [2024-12-14 03:18:08.382203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.319 qpair failed and we were unable to recover it. 00:36:53.319 [2024-12-14 03:18:08.382323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.319 [2024-12-14 03:18:08.382346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.319 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.382445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.382468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.382552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.382575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.382685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.382709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.382802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.382825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 
00:36:53.581 [2024-12-14 03:18:08.382910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.382937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.383035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.383058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.383149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.383172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.383265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.383288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.383402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.383426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.383517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.383540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.383627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.383650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.383750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.383773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.383875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.383897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.384098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.384121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 
00:36:53.581 [2024-12-14 03:18:08.384381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.384405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.384492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.384515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.384603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.384626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.384796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.384819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.384976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.384999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.385154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.385177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.385342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.385366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.385543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.385566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.385650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.385673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.385773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.385797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 
00:36:53.581 [2024-12-14 03:18:08.385977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.386000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.386151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.386174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.581 [2024-12-14 03:18:08.386276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.386299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.386458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.581 [2024-12-14 03:18:08.386481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.581 qpair failed and we were unable to recover it. 00:36:53.581 [2024-12-14 03:18:08.386647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.386670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.386755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.386778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.386889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.386917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.582 [2024-12-14 03:18:08.387139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.387163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 
00:36:53.582 [2024-12-14 03:18:08.387263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.387287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:53.582 [2024-12-14 03:18:08.387453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.387477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.387699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.387722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.387904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.387926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.388118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.388141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.388235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.388259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.388422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.388446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.388555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.388578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.388746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.388768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.388866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.388889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 
00:36:53.582 [2024-12-14 03:18:08.389078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.389101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca6cd0 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.389259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.389342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.389531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.582 [2024-12-14 03:18:08.389566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b4c000b90 with addr=10.0.0.2, port=4420 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.389740] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:53.582 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.582 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:53.582 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.582 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:53.582 [2024-12-14 03:18:08.395304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.582 [2024-12-14 03:18:08.395442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.582 [2024-12-14 03:18:08.395486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.582 [2024-12-14 03:18:08.395510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.582 [2024-12-14 03:18:08.395530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.582 [2024-12-14 03:18:08.395584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.582 qpair failed and we were unable to recover it. 
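Note: at this point target_disconnect.sh has added the subsystem listener and the discovery listener via rpc_cmd, and nvmf_tcp_listen confirms the target is listening on 10.0.0.2 port 4420. The TCP connect now succeeds, but the Fabrics CONNECT command itself is rejected (sct 1, sc 130 — apparently a Connect-specific "invalid parameters" status, consistent with the target-side "Unknown controller ID 0x1" message), so the qpair still fails. A minimal sketch of the target-side setup the rpc_cmd calls correspond to, assuming rpc_cmd wraps SPDK's scripts/rpc.py and that the transport and subsystem were created earlier in the script (those two lines are assumptions; only the nvmf_subsystem_add_listener calls appear verbatim in the trace):

    # Hypothetical reconstruction of the listener setup seen in the trace.
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420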
00:36:53.582 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.582 03:18:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 391365 00:36:53.582 [2024-12-14 03:18:08.405186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.582 [2024-12-14 03:18:08.405274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.582 [2024-12-14 03:18:08.405305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.582 [2024-12-14 03:18:08.405334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.582 [2024-12-14 03:18:08.405350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.582 [2024-12-14 03:18:08.405386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.415183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.582 [2024-12-14 03:18:08.415250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.582 [2024-12-14 03:18:08.415271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.582 [2024-12-14 03:18:08.415283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.582 [2024-12-14 03:18:08.415293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.582 [2024-12-14 03:18:08.415328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.425253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.582 [2024-12-14 03:18:08.425360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.582 [2024-12-14 03:18:08.425374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.582 [2024-12-14 03:18:08.425382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.582 [2024-12-14 03:18:08.425389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.582 [2024-12-14 03:18:08.425405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.582 qpair failed and we were unable to recover it. 
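Note: after the `wait 391365` the host keeps polling, and every subsequent block below is the same failure repeating with only the timestamps changing: "Unknown controller ID 0x1" on the target, Connect completed with sct 1 / sc 130 on the host, then "CQ transport error -6 (No such device or address) on qpair id 4". A hedged sketch of exercising the same path by hand from the initiator host with nvme-cli (a diagnostic aid under the assumption that nvme-cli is installed; it is not part of target_disconnect.sh):

    # List what the target advertises, then attempt the same I/O-queue CONNECT.
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1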
00:36:53.582 [2024-12-14 03:18:08.435186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.582 [2024-12-14 03:18:08.435245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.582 [2024-12-14 03:18:08.435259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.582 [2024-12-14 03:18:08.435267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.582 [2024-12-14 03:18:08.435273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.582 [2024-12-14 03:18:08.435289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.445254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.582 [2024-12-14 03:18:08.445315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.582 [2024-12-14 03:18:08.445329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.582 [2024-12-14 03:18:08.445336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.582 [2024-12-14 03:18:08.445342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.582 [2024-12-14 03:18:08.445357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.582 qpair failed and we were unable to recover it. 00:36:53.582 [2024-12-14 03:18:08.455214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.583 [2024-12-14 03:18:08.455266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.583 [2024-12-14 03:18:08.455280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.583 [2024-12-14 03:18:08.455287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.583 [2024-12-14 03:18:08.455293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.583 [2024-12-14 03:18:08.455309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.583 qpair failed and we were unable to recover it. 
00:36:53.583 [2024-12-14 03:18:08.465268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.583 [2024-12-14 03:18:08.465356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.583 [2024-12-14 03:18:08.465371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.583 [2024-12-14 03:18:08.465378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.583 [2024-12-14 03:18:08.465385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.583 [2024-12-14 03:18:08.465400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.583 qpair failed and we were unable to recover it. 00:36:53.583 [2024-12-14 03:18:08.475372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.583 [2024-12-14 03:18:08.475435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.583 [2024-12-14 03:18:08.475448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.583 [2024-12-14 03:18:08.475456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.583 [2024-12-14 03:18:08.475462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.583 [2024-12-14 03:18:08.475477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.583 qpair failed and we were unable to recover it. 00:36:53.583 [2024-12-14 03:18:08.485356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.583 [2024-12-14 03:18:08.485411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.583 [2024-12-14 03:18:08.485424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.583 [2024-12-14 03:18:08.485431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.583 [2024-12-14 03:18:08.485437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.583 [2024-12-14 03:18:08.485453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.583 qpair failed and we were unable to recover it. 
00:36:53.583 [2024-12-14 03:18:08.495364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.583 [2024-12-14 03:18:08.495461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.583 [2024-12-14 03:18:08.495474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.583 [2024-12-14 03:18:08.495481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.583 [2024-12-14 03:18:08.495487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.583 [2024-12-14 03:18:08.495502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.583 qpair failed and we were unable to recover it. 00:36:53.583 [2024-12-14 03:18:08.505379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.583 [2024-12-14 03:18:08.505440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.583 [2024-12-14 03:18:08.505456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.583 [2024-12-14 03:18:08.505464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.583 [2024-12-14 03:18:08.505470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.583 [2024-12-14 03:18:08.505485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.583 qpair failed and we were unable to recover it. 00:36:53.583 [2024-12-14 03:18:08.515413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.583 [2024-12-14 03:18:08.515478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.583 [2024-12-14 03:18:08.515491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.583 [2024-12-14 03:18:08.515498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.583 [2024-12-14 03:18:08.515504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.583 [2024-12-14 03:18:08.515520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.583 qpair failed and we were unable to recover it. 
00:36:53.583 [2024-12-14 03:18:08.525437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.583 [2024-12-14 03:18:08.525489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.583 [2024-12-14 03:18:08.525503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.583 [2024-12-14 03:18:08.525509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.583 [2024-12-14 03:18:08.525515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.583 [2024-12-14 03:18:08.525531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.583 qpair failed and we were unable to recover it. 00:36:53.583 [2024-12-14 03:18:08.535513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.583 [2024-12-14 03:18:08.535565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.583 [2024-12-14 03:18:08.535578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.583 [2024-12-14 03:18:08.535584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.583 [2024-12-14 03:18:08.535591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.583 [2024-12-14 03:18:08.535606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.583 qpair failed and we were unable to recover it. 00:36:53.583 [2024-12-14 03:18:08.545569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.583 [2024-12-14 03:18:08.545627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.583 [2024-12-14 03:18:08.545640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.583 [2024-12-14 03:18:08.545647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.583 [2024-12-14 03:18:08.545656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.583 [2024-12-14 03:18:08.545672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.583 qpair failed and we were unable to recover it. 
00:36:53.583 [2024-12-14 03:18:08.555546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.583 [2024-12-14 03:18:08.555604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.583 [2024-12-14 03:18:08.555617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.583 [2024-12-14 03:18:08.555624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.583 [2024-12-14 03:18:08.555630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.583 [2024-12-14 03:18:08.555645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.583 qpair failed and we were unable to recover it. 00:36:53.583 [2024-12-14 03:18:08.565567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.583 [2024-12-14 03:18:08.565620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.583 [2024-12-14 03:18:08.565633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.583 [2024-12-14 03:18:08.565640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.583 [2024-12-14 03:18:08.565646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.583 [2024-12-14 03:18:08.565662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.583 qpair failed and we were unable to recover it. 00:36:53.583 [2024-12-14 03:18:08.575631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.583 [2024-12-14 03:18:08.575695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.583 [2024-12-14 03:18:08.575708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.583 [2024-12-14 03:18:08.575715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.583 [2024-12-14 03:18:08.575721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.583 [2024-12-14 03:18:08.575736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.583 qpair failed and we were unable to recover it. 
00:36:53.583 [2024-12-14 03:18:08.585563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.583 [2024-12-14 03:18:08.585618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.584 [2024-12-14 03:18:08.585631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.584 [2024-12-14 03:18:08.585637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.584 [2024-12-14 03:18:08.585644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.584 [2024-12-14 03:18:08.585659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.584 qpair failed and we were unable to recover it. 00:36:53.584 [2024-12-14 03:18:08.595650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.584 [2024-12-14 03:18:08.595753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.584 [2024-12-14 03:18:08.595766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.584 [2024-12-14 03:18:08.595773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.584 [2024-12-14 03:18:08.595779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.584 [2024-12-14 03:18:08.595794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.584 qpair failed and we were unable to recover it. 00:36:53.584 [2024-12-14 03:18:08.605672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.584 [2024-12-14 03:18:08.605728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.584 [2024-12-14 03:18:08.605741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.584 [2024-12-14 03:18:08.605748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.584 [2024-12-14 03:18:08.605754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.584 [2024-12-14 03:18:08.605770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.584 qpair failed and we were unable to recover it. 
00:36:53.584 [2024-12-14 03:18:08.615694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.584 [2024-12-14 03:18:08.615747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.584 [2024-12-14 03:18:08.615759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.584 [2024-12-14 03:18:08.615766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.584 [2024-12-14 03:18:08.615772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.584 [2024-12-14 03:18:08.615787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.584 qpair failed and we were unable to recover it. 00:36:53.584 [2024-12-14 03:18:08.625747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.584 [2024-12-14 03:18:08.625809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.584 [2024-12-14 03:18:08.625822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.584 [2024-12-14 03:18:08.625828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.584 [2024-12-14 03:18:08.625835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.584 [2024-12-14 03:18:08.625850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.584 qpair failed and we were unable to recover it. 00:36:53.584 [2024-12-14 03:18:08.635719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.584 [2024-12-14 03:18:08.635775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.584 [2024-12-14 03:18:08.635805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.584 [2024-12-14 03:18:08.635813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.584 [2024-12-14 03:18:08.635820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.584 [2024-12-14 03:18:08.635841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.584 qpair failed and we were unable to recover it. 
00:36:53.584 [2024-12-14 03:18:08.645786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.584 [2024-12-14 03:18:08.645846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.584 [2024-12-14 03:18:08.645860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.584 [2024-12-14 03:18:08.645867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.584 [2024-12-14 03:18:08.645874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.584 [2024-12-14 03:18:08.645889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.584 qpair failed and we were unable to recover it. 00:36:53.584 [2024-12-14 03:18:08.655798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.584 [2024-12-14 03:18:08.655847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.584 [2024-12-14 03:18:08.655861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.584 [2024-12-14 03:18:08.655868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.584 [2024-12-14 03:18:08.655874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.584 [2024-12-14 03:18:08.655889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.584 qpair failed and we were unable to recover it. 00:36:53.584 [2024-12-14 03:18:08.665826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.584 [2024-12-14 03:18:08.665920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.584 [2024-12-14 03:18:08.665934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.584 [2024-12-14 03:18:08.665940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.584 [2024-12-14 03:18:08.665947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.584 [2024-12-14 03:18:08.665962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.584 qpair failed and we were unable to recover it. 
00:36:53.584 [2024-12-14 03:18:08.675862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.584 [2024-12-14 03:18:08.675914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.584 [2024-12-14 03:18:08.675928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.584 [2024-12-14 03:18:08.675935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.584 [2024-12-14 03:18:08.675944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.584 [2024-12-14 03:18:08.675959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.584 qpair failed and we were unable to recover it. 00:36:53.584 [2024-12-14 03:18:08.685893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.584 [2024-12-14 03:18:08.685949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.584 [2024-12-14 03:18:08.685962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.584 [2024-12-14 03:18:08.685969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.584 [2024-12-14 03:18:08.685975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.584 [2024-12-14 03:18:08.685990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.584 qpair failed and we were unable to recover it. 00:36:53.584 [2024-12-14 03:18:08.695948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.584 [2024-12-14 03:18:08.696026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.584 [2024-12-14 03:18:08.696041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.584 [2024-12-14 03:18:08.696048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.584 [2024-12-14 03:18:08.696054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.584 [2024-12-14 03:18:08.696070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.584 qpair failed and we were unable to recover it. 
00:36:53.584 [2024-12-14 03:18:08.705965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.584 [2024-12-14 03:18:08.706019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.584 [2024-12-14 03:18:08.706033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.584 [2024-12-14 03:18:08.706039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.584 [2024-12-14 03:18:08.706045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.584 [2024-12-14 03:18:08.706060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.584 qpair failed and we were unable to recover it. 00:36:53.845 [2024-12-14 03:18:08.715982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.845 [2024-12-14 03:18:08.716037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.845 [2024-12-14 03:18:08.716050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.845 [2024-12-14 03:18:08.716057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.845 [2024-12-14 03:18:08.716064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.845 [2024-12-14 03:18:08.716080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-12-14 03:18:08.726006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.845 [2024-12-14 03:18:08.726059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.845 [2024-12-14 03:18:08.726072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.845 [2024-12-14 03:18:08.726079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.845 [2024-12-14 03:18:08.726085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.845 [2024-12-14 03:18:08.726100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.845 qpair failed and we were unable to recover it. 
00:36:53.845 [2024-12-14 03:18:08.736065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.845 [2024-12-14 03:18:08.736135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.845 [2024-12-14 03:18:08.736148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.845 [2024-12-14 03:18:08.736155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.845 [2024-12-14 03:18:08.736161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.845 [2024-12-14 03:18:08.736177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-12-14 03:18:08.746075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.845 [2024-12-14 03:18:08.746172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.845 [2024-12-14 03:18:08.746186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.845 [2024-12-14 03:18:08.746193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.845 [2024-12-14 03:18:08.746199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.845 [2024-12-14 03:18:08.746214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-12-14 03:18:08.756100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.845 [2024-12-14 03:18:08.756156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.845 [2024-12-14 03:18:08.756169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.845 [2024-12-14 03:18:08.756176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.845 [2024-12-14 03:18:08.756182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.845 [2024-12-14 03:18:08.756197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.845 qpair failed and we were unable to recover it. 
00:36:53.845 [2024-12-14 03:18:08.766145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.845 [2024-12-14 03:18:08.766234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.845 [2024-12-14 03:18:08.766247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.845 [2024-12-14 03:18:08.766254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.845 [2024-12-14 03:18:08.766260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.845 [2024-12-14 03:18:08.766275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-12-14 03:18:08.776155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.845 [2024-12-14 03:18:08.776209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.845 [2024-12-14 03:18:08.776223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.845 [2024-12-14 03:18:08.776230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.845 [2024-12-14 03:18:08.776236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.845 [2024-12-14 03:18:08.776251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-12-14 03:18:08.786189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.845 [2024-12-14 03:18:08.786281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.845 [2024-12-14 03:18:08.786295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.845 [2024-12-14 03:18:08.786302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.845 [2024-12-14 03:18:08.786308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.845 [2024-12-14 03:18:08.786329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.845 qpair failed and we were unable to recover it. 
00:36:53.845 [2024-12-14 03:18:08.796219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.845 [2024-12-14 03:18:08.796277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.845 [2024-12-14 03:18:08.796290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.845 [2024-12-14 03:18:08.796296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.845 [2024-12-14 03:18:08.796303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.845 [2024-12-14 03:18:08.796322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-12-14 03:18:08.806263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.845 [2024-12-14 03:18:08.806326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.845 [2024-12-14 03:18:08.806340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.845 [2024-12-14 03:18:08.806350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.845 [2024-12-14 03:18:08.806356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.845 [2024-12-14 03:18:08.806371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.845 qpair failed and we were unable to recover it. 00:36:53.845 [2024-12-14 03:18:08.816270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.845 [2024-12-14 03:18:08.816324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.845 [2024-12-14 03:18:08.816338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.846 [2024-12-14 03:18:08.816345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.846 [2024-12-14 03:18:08.816351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.846 [2024-12-14 03:18:08.816366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.846 qpair failed and we were unable to recover it. 
00:36:53.846 [2024-12-14 03:18:08.826303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.846 [2024-12-14 03:18:08.826371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.846 [2024-12-14 03:18:08.826384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.846 [2024-12-14 03:18:08.826391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.846 [2024-12-14 03:18:08.826397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.846 [2024-12-14 03:18:08.826412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-12-14 03:18:08.836334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.846 [2024-12-14 03:18:08.836391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.846 [2024-12-14 03:18:08.836404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.846 [2024-12-14 03:18:08.836411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.846 [2024-12-14 03:18:08.836417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.846 [2024-12-14 03:18:08.836433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-12-14 03:18:08.846349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.846 [2024-12-14 03:18:08.846404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.846 [2024-12-14 03:18:08.846417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.846 [2024-12-14 03:18:08.846424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.846 [2024-12-14 03:18:08.846430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.846 [2024-12-14 03:18:08.846448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.846 qpair failed and we were unable to recover it. 
00:36:53.846 [2024-12-14 03:18:08.856430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.846 [2024-12-14 03:18:08.856486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.846 [2024-12-14 03:18:08.856499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.846 [2024-12-14 03:18:08.856506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.846 [2024-12-14 03:18:08.856513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.846 [2024-12-14 03:18:08.856528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-12-14 03:18:08.866420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.846 [2024-12-14 03:18:08.866479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.846 [2024-12-14 03:18:08.866493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.846 [2024-12-14 03:18:08.866500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.846 [2024-12-14 03:18:08.866507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.846 [2024-12-14 03:18:08.866522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-12-14 03:18:08.876481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.846 [2024-12-14 03:18:08.876543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.846 [2024-12-14 03:18:08.876556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.846 [2024-12-14 03:18:08.876563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.846 [2024-12-14 03:18:08.876570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.846 [2024-12-14 03:18:08.876585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.846 qpair failed and we were unable to recover it. 
00:36:53.846 [2024-12-14 03:18:08.886466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.846 [2024-12-14 03:18:08.886542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.846 [2024-12-14 03:18:08.886555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.846 [2024-12-14 03:18:08.886562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.846 [2024-12-14 03:18:08.886568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.846 [2024-12-14 03:18:08.886582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-12-14 03:18:08.896491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.846 [2024-12-14 03:18:08.896548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.846 [2024-12-14 03:18:08.896561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.846 [2024-12-14 03:18:08.896568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.846 [2024-12-14 03:18:08.896574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.846 [2024-12-14 03:18:08.896589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-12-14 03:18:08.906508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.846 [2024-12-14 03:18:08.906566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.846 [2024-12-14 03:18:08.906579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.846 [2024-12-14 03:18:08.906585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.846 [2024-12-14 03:18:08.906592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.846 [2024-12-14 03:18:08.906607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.846 qpair failed and we were unable to recover it. 
00:36:53.846 [2024-12-14 03:18:08.916549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.846 [2024-12-14 03:18:08.916607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.846 [2024-12-14 03:18:08.916620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.846 [2024-12-14 03:18:08.916627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.846 [2024-12-14 03:18:08.916633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.846 [2024-12-14 03:18:08.916647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-12-14 03:18:08.926574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.846 [2024-12-14 03:18:08.926623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.846 [2024-12-14 03:18:08.926637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.846 [2024-12-14 03:18:08.926643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.846 [2024-12-14 03:18:08.926650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.846 [2024-12-14 03:18:08.926665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.846 qpair failed and we were unable to recover it. 00:36:53.846 [2024-12-14 03:18:08.936607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.846 [2024-12-14 03:18:08.936660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.846 [2024-12-14 03:18:08.936676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.846 [2024-12-14 03:18:08.936683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.846 [2024-12-14 03:18:08.936689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.846 [2024-12-14 03:18:08.936705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.846 qpair failed and we were unable to recover it. 
00:36:53.846 [2024-12-14 03:18:08.946656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.846 [2024-12-14 03:18:08.946715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.846 [2024-12-14 03:18:08.946730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.846 [2024-12-14 03:18:08.946737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.846 [2024-12-14 03:18:08.946743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.846 [2024-12-14 03:18:08.946758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-12-14 03:18:08.956665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.847 [2024-12-14 03:18:08.956725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.847 [2024-12-14 03:18:08.956738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.847 [2024-12-14 03:18:08.956746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.847 [2024-12-14 03:18:08.956753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.847 [2024-12-14 03:18:08.956769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.847 qpair failed and we were unable to recover it. 00:36:53.847 [2024-12-14 03:18:08.966623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.847 [2024-12-14 03:18:08.966677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.847 [2024-12-14 03:18:08.966690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.847 [2024-12-14 03:18:08.966697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.847 [2024-12-14 03:18:08.966704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:53.847 [2024-12-14 03:18:08.966720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:53.847 qpair failed and we were unable to recover it. 
00:36:54.107 [2024-12-14 03:18:08.976641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.107 [2024-12-14 03:18:08.976698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.107 [2024-12-14 03:18:08.976711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.107 [2024-12-14 03:18:08.976719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.107 [2024-12-14 03:18:08.976726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.107 [2024-12-14 03:18:08.976744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.107 qpair failed and we were unable to recover it. 00:36:54.107 [2024-12-14 03:18:08.986755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.107 [2024-12-14 03:18:08.986809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.107 [2024-12-14 03:18:08.986822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.107 [2024-12-14 03:18:08.986829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.107 [2024-12-14 03:18:08.986835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.107 [2024-12-14 03:18:08.986851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.107 qpair failed and we were unable to recover it. 00:36:54.107 [2024-12-14 03:18:08.996754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.107 [2024-12-14 03:18:08.996811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.107 [2024-12-14 03:18:08.996823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.107 [2024-12-14 03:18:08.996830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.107 [2024-12-14 03:18:08.996836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.107 [2024-12-14 03:18:08.996851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.107 qpair failed and we were unable to recover it. 
00:36:54.107 [2024-12-14 03:18:09.006793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.107 [2024-12-14 03:18:09.006853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.107 [2024-12-14 03:18:09.006866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.107 [2024-12-14 03:18:09.006873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.107 [2024-12-14 03:18:09.006879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.107 [2024-12-14 03:18:09.006895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.107 qpair failed and we were unable to recover it. 00:36:54.107 [2024-12-14 03:18:09.016782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.107 [2024-12-14 03:18:09.016881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.107 [2024-12-14 03:18:09.016894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.107 [2024-12-14 03:18:09.016901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.107 [2024-12-14 03:18:09.016907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.107 [2024-12-14 03:18:09.016922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.107 qpair failed and we were unable to recover it. 00:36:54.107 [2024-12-14 03:18:09.026865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.107 [2024-12-14 03:18:09.026923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.107 [2024-12-14 03:18:09.026936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.107 [2024-12-14 03:18:09.026943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.107 [2024-12-14 03:18:09.026949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.107 [2024-12-14 03:18:09.026964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.107 qpair failed and we were unable to recover it. 
00:36:54.107 [2024-12-14 03:18:09.036894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.107 [2024-12-14 03:18:09.036948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.107 [2024-12-14 03:18:09.036961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.107 [2024-12-14 03:18:09.036968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.107 [2024-12-14 03:18:09.036974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.107 [2024-12-14 03:18:09.036990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.107 qpair failed and we were unable to recover it. 00:36:54.107 [2024-12-14 03:18:09.046950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.107 [2024-12-14 03:18:09.047018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.107 [2024-12-14 03:18:09.047033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.107 [2024-12-14 03:18:09.047040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.107 [2024-12-14 03:18:09.047046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.107 [2024-12-14 03:18:09.047061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.107 qpair failed and we were unable to recover it. 00:36:54.107 [2024-12-14 03:18:09.056956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.108 [2024-12-14 03:18:09.057014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.108 [2024-12-14 03:18:09.057027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.108 [2024-12-14 03:18:09.057034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.108 [2024-12-14 03:18:09.057041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.108 [2024-12-14 03:18:09.057057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.108 qpair failed and we were unable to recover it. 
00:36:54.108 [2024-12-14 03:18:09.066987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.108 [2024-12-14 03:18:09.067042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.108 [2024-12-14 03:18:09.067058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.108 [2024-12-14 03:18:09.067065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.108 [2024-12-14 03:18:09.067072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.108 [2024-12-14 03:18:09.067088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.108 qpair failed and we were unable to recover it. 00:36:54.108 [2024-12-14 03:18:09.076951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.108 [2024-12-14 03:18:09.077005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.108 [2024-12-14 03:18:09.077018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.108 [2024-12-14 03:18:09.077024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.108 [2024-12-14 03:18:09.077030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.108 [2024-12-14 03:18:09.077045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.108 qpair failed and we were unable to recover it. 00:36:54.108 [2024-12-14 03:18:09.087051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.108 [2024-12-14 03:18:09.087106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.108 [2024-12-14 03:18:09.087119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.108 [2024-12-14 03:18:09.087125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.108 [2024-12-14 03:18:09.087132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.108 [2024-12-14 03:18:09.087147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.108 qpair failed and we were unable to recover it. 
00:36:54.108 [2024-12-14 03:18:09.097060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.108 [2024-12-14 03:18:09.097111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.108 [2024-12-14 03:18:09.097124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.108 [2024-12-14 03:18:09.097131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.108 [2024-12-14 03:18:09.097137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.108 [2024-12-14 03:18:09.097152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.108 qpair failed and we were unable to recover it. 00:36:54.108 [2024-12-14 03:18:09.107119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.108 [2024-12-14 03:18:09.107176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.108 [2024-12-14 03:18:09.107189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.108 [2024-12-14 03:18:09.107196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.108 [2024-12-14 03:18:09.107206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.108 [2024-12-14 03:18:09.107221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.108 qpair failed and we were unable to recover it. 00:36:54.108 [2024-12-14 03:18:09.117119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.108 [2024-12-14 03:18:09.117173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.108 [2024-12-14 03:18:09.117186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.108 [2024-12-14 03:18:09.117193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.108 [2024-12-14 03:18:09.117199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.108 [2024-12-14 03:18:09.117214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.108 qpair failed and we were unable to recover it. 
00:36:54.108 [2024-12-14 03:18:09.127140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.108 [2024-12-14 03:18:09.127196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.108 [2024-12-14 03:18:09.127209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.108 [2024-12-14 03:18:09.127215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.108 [2024-12-14 03:18:09.127222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.108 [2024-12-14 03:18:09.127237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.108 qpair failed and we were unable to recover it. 00:36:54.108 [2024-12-14 03:18:09.137179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.108 [2024-12-14 03:18:09.137235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.108 [2024-12-14 03:18:09.137247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.108 [2024-12-14 03:18:09.137254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.108 [2024-12-14 03:18:09.137260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.108 [2024-12-14 03:18:09.137276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.108 qpair failed and we were unable to recover it. 00:36:54.108 [2024-12-14 03:18:09.147217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.108 [2024-12-14 03:18:09.147274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.108 [2024-12-14 03:18:09.147287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.108 [2024-12-14 03:18:09.147294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.108 [2024-12-14 03:18:09.147301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.108 [2024-12-14 03:18:09.147320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.108 qpair failed and we were unable to recover it. 
00:36:54.108 [2024-12-14 03:18:09.157240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.108 [2024-12-14 03:18:09.157297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.108 [2024-12-14 03:18:09.157310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.108 [2024-12-14 03:18:09.157320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.108 [2024-12-14 03:18:09.157326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.108 [2024-12-14 03:18:09.157342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.108 qpair failed and we were unable to recover it. 00:36:54.108 [2024-12-14 03:18:09.167274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.108 [2024-12-14 03:18:09.167330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.108 [2024-12-14 03:18:09.167343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.108 [2024-12-14 03:18:09.167350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.108 [2024-12-14 03:18:09.167357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.108 [2024-12-14 03:18:09.167372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.108 qpair failed and we were unable to recover it. 00:36:54.108 [2024-12-14 03:18:09.177306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.108 [2024-12-14 03:18:09.177363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.108 [2024-12-14 03:18:09.177375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.108 [2024-12-14 03:18:09.177382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.108 [2024-12-14 03:18:09.177389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.108 [2024-12-14 03:18:09.177404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.108 qpair failed and we were unable to recover it. 
00:36:54.108 [2024-12-14 03:18:09.187318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.108 [2024-12-14 03:18:09.187389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.108 [2024-12-14 03:18:09.187402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.109 [2024-12-14 03:18:09.187409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.109 [2024-12-14 03:18:09.187416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.109 [2024-12-14 03:18:09.187430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.109 qpair failed and we were unable to recover it. 00:36:54.109 [2024-12-14 03:18:09.197408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.109 [2024-12-14 03:18:09.197509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.109 [2024-12-14 03:18:09.197526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.109 [2024-12-14 03:18:09.197533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.109 [2024-12-14 03:18:09.197539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.109 [2024-12-14 03:18:09.197554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.109 qpair failed and we were unable to recover it. 00:36:54.109 [2024-12-14 03:18:09.207511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.109 [2024-12-14 03:18:09.207573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.109 [2024-12-14 03:18:09.207587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.109 [2024-12-14 03:18:09.207593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.109 [2024-12-14 03:18:09.207600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.109 [2024-12-14 03:18:09.207615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.109 qpair failed and we were unable to recover it. 
00:36:54.109 [2024-12-14 03:18:09.217458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.109 [2024-12-14 03:18:09.217513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.109 [2024-12-14 03:18:09.217526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.109 [2024-12-14 03:18:09.217533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.109 [2024-12-14 03:18:09.217539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.109 [2024-12-14 03:18:09.217554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.109 qpair failed and we were unable to recover it. 00:36:54.109 [2024-12-14 03:18:09.227481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.109 [2024-12-14 03:18:09.227538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.109 [2024-12-14 03:18:09.227551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.109 [2024-12-14 03:18:09.227558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.109 [2024-12-14 03:18:09.227564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.109 [2024-12-14 03:18:09.227579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.109 qpair failed and we were unable to recover it. 00:36:54.109 [2024-12-14 03:18:09.237520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.109 [2024-12-14 03:18:09.237576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.109 [2024-12-14 03:18:09.237589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.109 [2024-12-14 03:18:09.237598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.109 [2024-12-14 03:18:09.237605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.109 [2024-12-14 03:18:09.237620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.109 qpair failed and we were unable to recover it. 
00:36:54.369 [2024-12-14 03:18:09.247505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.369 [2024-12-14 03:18:09.247557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.369 [2024-12-14 03:18:09.247569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.369 [2024-12-14 03:18:09.247575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.369 [2024-12-14 03:18:09.247582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.369 [2024-12-14 03:18:09.247597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.369 qpair failed and we were unable to recover it. 00:36:54.369 [2024-12-14 03:18:09.257543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.369 [2024-12-14 03:18:09.257614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.369 [2024-12-14 03:18:09.257627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.369 [2024-12-14 03:18:09.257634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.369 [2024-12-14 03:18:09.257640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.369 [2024-12-14 03:18:09.257656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.369 qpair failed and we were unable to recover it. 00:36:54.369 [2024-12-14 03:18:09.267537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.369 [2024-12-14 03:18:09.267593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.369 [2024-12-14 03:18:09.267605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.369 [2024-12-14 03:18:09.267612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.369 [2024-12-14 03:18:09.267619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.369 [2024-12-14 03:18:09.267634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.369 qpair failed and we were unable to recover it. 
00:36:54.369 [2024-12-14 03:18:09.277592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.369 [2024-12-14 03:18:09.277647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.369 [2024-12-14 03:18:09.277661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.369 [2024-12-14 03:18:09.277668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.369 [2024-12-14 03:18:09.277674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.369 [2024-12-14 03:18:09.277689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.369 qpair failed and we were unable to recover it. 00:36:54.369 [2024-12-14 03:18:09.287614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.369 [2024-12-14 03:18:09.287664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.369 [2024-12-14 03:18:09.287677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.369 [2024-12-14 03:18:09.287683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.369 [2024-12-14 03:18:09.287690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.369 [2024-12-14 03:18:09.287705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.369 qpair failed and we were unable to recover it. 00:36:54.369 [2024-12-14 03:18:09.297642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.369 [2024-12-14 03:18:09.297697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.369 [2024-12-14 03:18:09.297711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.369 [2024-12-14 03:18:09.297717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.369 [2024-12-14 03:18:09.297724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.369 [2024-12-14 03:18:09.297739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.369 qpair failed and we were unable to recover it. 
00:36:54.369 [2024-12-14 03:18:09.307657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.369 [2024-12-14 03:18:09.307716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.369 [2024-12-14 03:18:09.307729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.370 [2024-12-14 03:18:09.307736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.370 [2024-12-14 03:18:09.307742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.370 [2024-12-14 03:18:09.307757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.370 qpair failed and we were unable to recover it. 00:36:54.370 [2024-12-14 03:18:09.317705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.370 [2024-12-14 03:18:09.317762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.370 [2024-12-14 03:18:09.317775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.370 [2024-12-14 03:18:09.317781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.370 [2024-12-14 03:18:09.317788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.370 [2024-12-14 03:18:09.317803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.370 qpair failed and we were unable to recover it. 00:36:54.370 [2024-12-14 03:18:09.327658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.370 [2024-12-14 03:18:09.327713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.370 [2024-12-14 03:18:09.327726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.370 [2024-12-14 03:18:09.327732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.370 [2024-12-14 03:18:09.327739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.370 [2024-12-14 03:18:09.327755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.370 qpair failed and we were unable to recover it. 
00:36:54.370 [2024-12-14 03:18:09.337746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.370 [2024-12-14 03:18:09.337798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.370 [2024-12-14 03:18:09.337811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.370 [2024-12-14 03:18:09.337818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.370 [2024-12-14 03:18:09.337824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.370 [2024-12-14 03:18:09.337840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.370 qpair failed and we were unable to recover it. 00:36:54.370 [2024-12-14 03:18:09.347711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.370 [2024-12-14 03:18:09.347768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.370 [2024-12-14 03:18:09.347781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.370 [2024-12-14 03:18:09.347788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.370 [2024-12-14 03:18:09.347794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.370 [2024-12-14 03:18:09.347809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.370 qpair failed and we were unable to recover it. 00:36:54.370 [2024-12-14 03:18:09.357808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.370 [2024-12-14 03:18:09.357870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.370 [2024-12-14 03:18:09.357883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.370 [2024-12-14 03:18:09.357890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.370 [2024-12-14 03:18:09.357897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.370 [2024-12-14 03:18:09.357911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.370 qpair failed and we were unable to recover it. 
00:36:54.370 [2024-12-14 03:18:09.367848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.370 [2024-12-14 03:18:09.367908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.370 [2024-12-14 03:18:09.367920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.370 [2024-12-14 03:18:09.367932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.370 [2024-12-14 03:18:09.367938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.370 [2024-12-14 03:18:09.367953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.370 qpair failed and we were unable to recover it. 00:36:54.370 [2024-12-14 03:18:09.377838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.370 [2024-12-14 03:18:09.377893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.370 [2024-12-14 03:18:09.377906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.370 [2024-12-14 03:18:09.377915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.370 [2024-12-14 03:18:09.377922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.370 [2024-12-14 03:18:09.377937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.370 qpair failed and we were unable to recover it. 00:36:54.370 [2024-12-14 03:18:09.387903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.370 [2024-12-14 03:18:09.387958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.370 [2024-12-14 03:18:09.387971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.370 [2024-12-14 03:18:09.387978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.370 [2024-12-14 03:18:09.387984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.370 [2024-12-14 03:18:09.388000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.370 qpair failed and we were unable to recover it. 
00:36:54.370 [2024-12-14 03:18:09.397898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.370 [2024-12-14 03:18:09.397954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.370 [2024-12-14 03:18:09.397967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.370 [2024-12-14 03:18:09.397974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.370 [2024-12-14 03:18:09.397980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.370 [2024-12-14 03:18:09.397995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.370 qpair failed and we were unable to recover it. 00:36:54.370 [2024-12-14 03:18:09.407951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.370 [2024-12-14 03:18:09.408007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.370 [2024-12-14 03:18:09.408020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.370 [2024-12-14 03:18:09.408027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.370 [2024-12-14 03:18:09.408033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.370 [2024-12-14 03:18:09.408051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.370 qpair failed and we were unable to recover it. 00:36:54.370 [2024-12-14 03:18:09.417988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.370 [2024-12-14 03:18:09.418039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.370 [2024-12-14 03:18:09.418053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.370 [2024-12-14 03:18:09.418060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.370 [2024-12-14 03:18:09.418066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.370 [2024-12-14 03:18:09.418081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.370 qpair failed and we were unable to recover it. 
00:36:54.370 [2024-12-14 03:18:09.428107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.370 [2024-12-14 03:18:09.428166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.370 [2024-12-14 03:18:09.428181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.370 [2024-12-14 03:18:09.428189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.370 [2024-12-14 03:18:09.428196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.370 [2024-12-14 03:18:09.428212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.370 qpair failed and we were unable to recover it. 00:36:54.370 [2024-12-14 03:18:09.438104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.370 [2024-12-14 03:18:09.438163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.370 [2024-12-14 03:18:09.438178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.370 [2024-12-14 03:18:09.438186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.370 [2024-12-14 03:18:09.438194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.371 [2024-12-14 03:18:09.438211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.371 qpair failed and we were unable to recover it. 00:36:54.371 [2024-12-14 03:18:09.448084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.371 [2024-12-14 03:18:09.448143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.371 [2024-12-14 03:18:09.448157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.371 [2024-12-14 03:18:09.448164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.371 [2024-12-14 03:18:09.448172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.371 [2024-12-14 03:18:09.448188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.371 qpair failed and we were unable to recover it. 
00:36:54.371 [2024-12-14 03:18:09.458106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.371 [2024-12-14 03:18:09.458160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.371 [2024-12-14 03:18:09.458174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.371 [2024-12-14 03:18:09.458182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.371 [2024-12-14 03:18:09.458189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.371 [2024-12-14 03:18:09.458206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.371 qpair failed and we were unable to recover it. 00:36:54.371 [2024-12-14 03:18:09.468150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.371 [2024-12-14 03:18:09.468206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.371 [2024-12-14 03:18:09.468221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.371 [2024-12-14 03:18:09.468229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.371 [2024-12-14 03:18:09.468237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.371 [2024-12-14 03:18:09.468254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.371 qpair failed and we were unable to recover it. 00:36:54.371 [2024-12-14 03:18:09.478159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.371 [2024-12-14 03:18:09.478210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.371 [2024-12-14 03:18:09.478225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.371 [2024-12-14 03:18:09.478233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.371 [2024-12-14 03:18:09.478240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.371 [2024-12-14 03:18:09.478257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.371 qpair failed and we were unable to recover it. 
00:36:54.371 [2024-12-14 03:18:09.488235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.371 [2024-12-14 03:18:09.488294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.371 [2024-12-14 03:18:09.488307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.371 [2024-12-14 03:18:09.488320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.371 [2024-12-14 03:18:09.488329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.371 [2024-12-14 03:18:09.488346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.371 qpair failed and we were unable to recover it. 00:36:54.371 [2024-12-14 03:18:09.498227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.371 [2024-12-14 03:18:09.498282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.371 [2024-12-14 03:18:09.498301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.371 [2024-12-14 03:18:09.498310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.371 [2024-12-14 03:18:09.498324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.371 [2024-12-14 03:18:09.498341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.371 qpair failed and we were unable to recover it. 00:36:54.631 [2024-12-14 03:18:09.508291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.631 [2024-12-14 03:18:09.508363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.631 [2024-12-14 03:18:09.508378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.631 [2024-12-14 03:18:09.508387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.631 [2024-12-14 03:18:09.508394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.631 [2024-12-14 03:18:09.508410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.631 qpair failed and we were unable to recover it. 
00:36:54.631 [2024-12-14 03:18:09.518296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.631 [2024-12-14 03:18:09.518360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.631 [2024-12-14 03:18:09.518375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.631 [2024-12-14 03:18:09.518383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.631 [2024-12-14 03:18:09.518391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.631 [2024-12-14 03:18:09.518407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.631 qpair failed and we were unable to recover it. 00:36:54.631 [2024-12-14 03:18:09.528340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.631 [2024-12-14 03:18:09.528414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.631 [2024-12-14 03:18:09.528429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.631 [2024-12-14 03:18:09.528437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.631 [2024-12-14 03:18:09.528445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.631 [2024-12-14 03:18:09.528462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.631 qpair failed and we were unable to recover it. 00:36:54.631 [2024-12-14 03:18:09.538343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.631 [2024-12-14 03:18:09.538393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.631 [2024-12-14 03:18:09.538408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.631 [2024-12-14 03:18:09.538417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.631 [2024-12-14 03:18:09.538424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.631 [2024-12-14 03:18:09.538444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.631 qpair failed and we were unable to recover it. 
00:36:54.631 [2024-12-14 03:18:09.548389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.631 [2024-12-14 03:18:09.548451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.631 [2024-12-14 03:18:09.548466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.631 [2024-12-14 03:18:09.548475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.631 [2024-12-14 03:18:09.548483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.632 [2024-12-14 03:18:09.548500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.632 qpair failed and we were unable to recover it. 00:36:54.632 [2024-12-14 03:18:09.558428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.632 [2024-12-14 03:18:09.558482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.632 [2024-12-14 03:18:09.558498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.632 [2024-12-14 03:18:09.558506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.632 [2024-12-14 03:18:09.558514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.632 [2024-12-14 03:18:09.558531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.632 qpair failed and we were unable to recover it. 00:36:54.632 [2024-12-14 03:18:09.568434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.632 [2024-12-14 03:18:09.568493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.632 [2024-12-14 03:18:09.568508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.632 [2024-12-14 03:18:09.568517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.632 [2024-12-14 03:18:09.568525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.632 [2024-12-14 03:18:09.568541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.632 qpair failed and we were unable to recover it. 
00:36:54.632 [2024-12-14 03:18:09.578476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.632 [2024-12-14 03:18:09.578530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.632 [2024-12-14 03:18:09.578545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.632 [2024-12-14 03:18:09.578553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.632 [2024-12-14 03:18:09.578562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.632 [2024-12-14 03:18:09.578579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.632 qpair failed and we were unable to recover it. 00:36:54.632 [2024-12-14 03:18:09.588489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.632 [2024-12-14 03:18:09.588567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.632 [2024-12-14 03:18:09.588582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.632 [2024-12-14 03:18:09.588591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.632 [2024-12-14 03:18:09.588600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.632 [2024-12-14 03:18:09.588618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.632 qpair failed and we were unable to recover it. 00:36:54.632 [2024-12-14 03:18:09.598519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.632 [2024-12-14 03:18:09.598575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.632 [2024-12-14 03:18:09.598590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.632 [2024-12-14 03:18:09.598600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.632 [2024-12-14 03:18:09.598607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.632 [2024-12-14 03:18:09.598624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.632 qpair failed and we were unable to recover it. 
00:36:54.632 [2024-12-14 03:18:09.608558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.632 [2024-12-14 03:18:09.608613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.632 [2024-12-14 03:18:09.608628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.632 [2024-12-14 03:18:09.608636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.632 [2024-12-14 03:18:09.608644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.632 [2024-12-14 03:18:09.608660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.632 qpair failed and we were unable to recover it. 00:36:54.632 [2024-12-14 03:18:09.618617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.632 [2024-12-14 03:18:09.618671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.632 [2024-12-14 03:18:09.618686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.632 [2024-12-14 03:18:09.618695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.632 [2024-12-14 03:18:09.618703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.632 [2024-12-14 03:18:09.618719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.632 qpair failed and we were unable to recover it. 00:36:54.632 [2024-12-14 03:18:09.628633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.632 [2024-12-14 03:18:09.628727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.632 [2024-12-14 03:18:09.628746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.632 [2024-12-14 03:18:09.628754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.632 [2024-12-14 03:18:09.628762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.632 [2024-12-14 03:18:09.628779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.632 qpair failed and we were unable to recover it. 
00:36:54.632 [2024-12-14 03:18:09.638643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.632 [2024-12-14 03:18:09.638699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.632 [2024-12-14 03:18:09.638714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.632 [2024-12-14 03:18:09.638723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.632 [2024-12-14 03:18:09.638731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.632 [2024-12-14 03:18:09.638748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.632 qpair failed and we were unable to recover it. 00:36:54.632 [2024-12-14 03:18:09.648665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.632 [2024-12-14 03:18:09.648722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.632 [2024-12-14 03:18:09.648737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.632 [2024-12-14 03:18:09.648746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.632 [2024-12-14 03:18:09.648754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.632 [2024-12-14 03:18:09.648771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.632 qpair failed and we were unable to recover it. 00:36:54.632 [2024-12-14 03:18:09.658687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.632 [2024-12-14 03:18:09.658748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.632 [2024-12-14 03:18:09.658763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.632 [2024-12-14 03:18:09.658772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.632 [2024-12-14 03:18:09.658780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.632 [2024-12-14 03:18:09.658797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.632 qpair failed and we were unable to recover it. 
00:36:54.632 [2024-12-14 03:18:09.668714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.632 [2024-12-14 03:18:09.668772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.632 [2024-12-14 03:18:09.668786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.632 [2024-12-14 03:18:09.668795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.632 [2024-12-14 03:18:09.668806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.632 [2024-12-14 03:18:09.668823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.632 qpair failed and we were unable to recover it. 00:36:54.632 [2024-12-14 03:18:09.678806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.632 [2024-12-14 03:18:09.678905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.632 [2024-12-14 03:18:09.678920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.632 [2024-12-14 03:18:09.678928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.632 [2024-12-14 03:18:09.678936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.632 [2024-12-14 03:18:09.678952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.633 qpair failed and we were unable to recover it. 00:36:54.633 [2024-12-14 03:18:09.688795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.633 [2024-12-14 03:18:09.688900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.633 [2024-12-14 03:18:09.688914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.633 [2024-12-14 03:18:09.688923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.633 [2024-12-14 03:18:09.688931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.633 [2024-12-14 03:18:09.688949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.633 qpair failed and we were unable to recover it. 
00:36:54.633 [2024-12-14 03:18:09.698725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.633 [2024-12-14 03:18:09.698779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.633 [2024-12-14 03:18:09.698794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.633 [2024-12-14 03:18:09.698802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.633 [2024-12-14 03:18:09.698810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.633 [2024-12-14 03:18:09.698827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.633 qpair failed and we were unable to recover it. 00:36:54.633 [2024-12-14 03:18:09.708900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.633 [2024-12-14 03:18:09.709001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.633 [2024-12-14 03:18:09.709014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.633 [2024-12-14 03:18:09.709023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.633 [2024-12-14 03:18:09.709030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.633 [2024-12-14 03:18:09.709046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.633 qpair failed and we were unable to recover it. 00:36:54.633 [2024-12-14 03:18:09.718871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.633 [2024-12-14 03:18:09.718926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.633 [2024-12-14 03:18:09.718939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.633 [2024-12-14 03:18:09.718946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.633 [2024-12-14 03:18:09.718952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.633 [2024-12-14 03:18:09.718968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.633 qpair failed and we were unable to recover it. 
00:36:54.633 [2024-12-14 03:18:09.728897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.633 [2024-12-14 03:18:09.728950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.633 [2024-12-14 03:18:09.728962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.633 [2024-12-14 03:18:09.728969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.633 [2024-12-14 03:18:09.728975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.633 [2024-12-14 03:18:09.728990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.633 qpair failed and we were unable to recover it. 00:36:54.633 [2024-12-14 03:18:09.738934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.633 [2024-12-14 03:18:09.738988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.633 [2024-12-14 03:18:09.739002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.633 [2024-12-14 03:18:09.739009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.633 [2024-12-14 03:18:09.739015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.633 [2024-12-14 03:18:09.739031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.633 qpair failed and we were unable to recover it. 00:36:54.633 [2024-12-14 03:18:09.748962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.633 [2024-12-14 03:18:09.749018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.633 [2024-12-14 03:18:09.749031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.633 [2024-12-14 03:18:09.749038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.633 [2024-12-14 03:18:09.749044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.633 [2024-12-14 03:18:09.749060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.633 qpair failed and we were unable to recover it. 
00:36:54.633 [2024-12-14 03:18:09.758981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.633 [2024-12-14 03:18:09.759041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.633 [2024-12-14 03:18:09.759058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.633 [2024-12-14 03:18:09.759065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.633 [2024-12-14 03:18:09.759071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.633 [2024-12-14 03:18:09.759086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.633 qpair failed and we were unable to recover it. 00:36:54.895 [2024-12-14 03:18:09.769014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.895 [2024-12-14 03:18:09.769070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.895 [2024-12-14 03:18:09.769083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.895 [2024-12-14 03:18:09.769090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.895 [2024-12-14 03:18:09.769097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.895 [2024-12-14 03:18:09.769111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.895 qpair failed and we were unable to recover it. 00:36:54.895 [2024-12-14 03:18:09.779025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.895 [2024-12-14 03:18:09.779077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.895 [2024-12-14 03:18:09.779090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.895 [2024-12-14 03:18:09.779097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.895 [2024-12-14 03:18:09.779104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.895 [2024-12-14 03:18:09.779119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.895 qpair failed and we were unable to recover it. 
00:36:54.895 [2024-12-14 03:18:09.789067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.895 [2024-12-14 03:18:09.789122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.895 [2024-12-14 03:18:09.789135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.895 [2024-12-14 03:18:09.789141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.895 [2024-12-14 03:18:09.789148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.895 [2024-12-14 03:18:09.789164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.895 qpair failed and we were unable to recover it. 00:36:54.895 [2024-12-14 03:18:09.799090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.895 [2024-12-14 03:18:09.799167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.895 [2024-12-14 03:18:09.799181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.895 [2024-12-14 03:18:09.799191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.895 [2024-12-14 03:18:09.799197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.895 [2024-12-14 03:18:09.799212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.895 qpair failed and we were unable to recover it. 00:36:54.895 [2024-12-14 03:18:09.809127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.895 [2024-12-14 03:18:09.809183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.895 [2024-12-14 03:18:09.809196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.895 [2024-12-14 03:18:09.809204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.895 [2024-12-14 03:18:09.809211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.895 [2024-12-14 03:18:09.809227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.895 qpair failed and we were unable to recover it. 
00:36:54.895 [2024-12-14 03:18:09.819149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.895 [2024-12-14 03:18:09.819202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.895 [2024-12-14 03:18:09.819215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.895 [2024-12-14 03:18:09.819222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.895 [2024-12-14 03:18:09.819228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.895 [2024-12-14 03:18:09.819244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.895 qpair failed and we were unable to recover it. 00:36:54.895 [2024-12-14 03:18:09.829174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.895 [2024-12-14 03:18:09.829249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.895 [2024-12-14 03:18:09.829262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.895 [2024-12-14 03:18:09.829268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.895 [2024-12-14 03:18:09.829274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.895 [2024-12-14 03:18:09.829289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.895 qpair failed and we were unable to recover it. 00:36:54.895 [2024-12-14 03:18:09.839205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.895 [2024-12-14 03:18:09.839261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.895 [2024-12-14 03:18:09.839274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.895 [2024-12-14 03:18:09.839281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.895 [2024-12-14 03:18:09.839287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.895 [2024-12-14 03:18:09.839302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.895 qpair failed and we were unable to recover it. 
00:36:54.895 [2024-12-14 03:18:09.849279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.895 [2024-12-14 03:18:09.849341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.895 [2024-12-14 03:18:09.849354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.895 [2024-12-14 03:18:09.849361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.895 [2024-12-14 03:18:09.849368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.895 [2024-12-14 03:18:09.849383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.895 qpair failed and we were unable to recover it. 00:36:54.895 [2024-12-14 03:18:09.859283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.895 [2024-12-14 03:18:09.859341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.895 [2024-12-14 03:18:09.859354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.895 [2024-12-14 03:18:09.859361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.895 [2024-12-14 03:18:09.859368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.895 [2024-12-14 03:18:09.859383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.895 qpair failed and we were unable to recover it. 00:36:54.895 [2024-12-14 03:18:09.869292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.895 [2024-12-14 03:18:09.869363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.895 [2024-12-14 03:18:09.869376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.895 [2024-12-14 03:18:09.869383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.895 [2024-12-14 03:18:09.869389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.895 [2024-12-14 03:18:09.869405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.895 qpair failed and we were unable to recover it. 
00:36:54.895 [2024-12-14 03:18:09.879317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.896 [2024-12-14 03:18:09.879371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.896 [2024-12-14 03:18:09.879384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.896 [2024-12-14 03:18:09.879391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.896 [2024-12-14 03:18:09.879397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.896 [2024-12-14 03:18:09.879413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.896 qpair failed and we were unable to recover it. 00:36:54.896 [2024-12-14 03:18:09.889346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.896 [2024-12-14 03:18:09.889399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.896 [2024-12-14 03:18:09.889412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.896 [2024-12-14 03:18:09.889419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.896 [2024-12-14 03:18:09.889425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.896 [2024-12-14 03:18:09.889441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.896 qpair failed and we were unable to recover it. 00:36:54.896 [2024-12-14 03:18:09.899381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.896 [2024-12-14 03:18:09.899464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.896 [2024-12-14 03:18:09.899477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.896 [2024-12-14 03:18:09.899484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.896 [2024-12-14 03:18:09.899490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.896 [2024-12-14 03:18:09.899505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.896 qpair failed and we were unable to recover it. 
00:36:54.896 [2024-12-14 03:18:09.909440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.896 [2024-12-14 03:18:09.909547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.896 [2024-12-14 03:18:09.909561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.896 [2024-12-14 03:18:09.909568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.896 [2024-12-14 03:18:09.909574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.896 [2024-12-14 03:18:09.909589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.896 qpair failed and we were unable to recover it. 00:36:54.896 [2024-12-14 03:18:09.919366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.896 [2024-12-14 03:18:09.919421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.896 [2024-12-14 03:18:09.919434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.896 [2024-12-14 03:18:09.919440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.896 [2024-12-14 03:18:09.919447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.896 [2024-12-14 03:18:09.919461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.896 qpair failed and we were unable to recover it. 00:36:54.896 [2024-12-14 03:18:09.929465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.896 [2024-12-14 03:18:09.929517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.896 [2024-12-14 03:18:09.929530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.896 [2024-12-14 03:18:09.929540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.896 [2024-12-14 03:18:09.929546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.896 [2024-12-14 03:18:09.929561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.896 qpair failed and we were unable to recover it. 
00:36:54.896 [2024-12-14 03:18:09.939500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.896 [2024-12-14 03:18:09.939553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.896 [2024-12-14 03:18:09.939566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.896 [2024-12-14 03:18:09.939573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.896 [2024-12-14 03:18:09.939579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.896 [2024-12-14 03:18:09.939594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.896 qpair failed and we were unable to recover it. 00:36:54.896 [2024-12-14 03:18:09.949561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.896 [2024-12-14 03:18:09.949626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.896 [2024-12-14 03:18:09.949639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.896 [2024-12-14 03:18:09.949646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.896 [2024-12-14 03:18:09.949652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.896 [2024-12-14 03:18:09.949669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.896 qpair failed and we were unable to recover it. 00:36:54.896 [2024-12-14 03:18:09.959562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.896 [2024-12-14 03:18:09.959618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.896 [2024-12-14 03:18:09.959630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.896 [2024-12-14 03:18:09.959637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.896 [2024-12-14 03:18:09.959643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.896 [2024-12-14 03:18:09.959658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.896 qpair failed and we were unable to recover it. 
00:36:54.896 [2024-12-14 03:18:09.969582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.896 [2024-12-14 03:18:09.969635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.896 [2024-12-14 03:18:09.969648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.896 [2024-12-14 03:18:09.969654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.896 [2024-12-14 03:18:09.969661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.896 [2024-12-14 03:18:09.969679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.896 qpair failed and we were unable to recover it. 00:36:54.896 [2024-12-14 03:18:09.979617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.896 [2024-12-14 03:18:09.979672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.896 [2024-12-14 03:18:09.979684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.896 [2024-12-14 03:18:09.979691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.896 [2024-12-14 03:18:09.979697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.896 [2024-12-14 03:18:09.979712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.896 qpair failed and we were unable to recover it. 00:36:54.896 [2024-12-14 03:18:09.989675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.896 [2024-12-14 03:18:09.989789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.896 [2024-12-14 03:18:09.989802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.896 [2024-12-14 03:18:09.989809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.896 [2024-12-14 03:18:09.989815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.896 [2024-12-14 03:18:09.989830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.896 qpair failed and we were unable to recover it. 
00:36:54.896 [2024-12-14 03:18:09.999698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.896 [2024-12-14 03:18:09.999753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.896 [2024-12-14 03:18:09.999766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.896 [2024-12-14 03:18:09.999773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.896 [2024-12-14 03:18:09.999779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.896 [2024-12-14 03:18:09.999796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.896 qpair failed and we were unable to recover it. 00:36:54.896 [2024-12-14 03:18:10.009699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.896 [2024-12-14 03:18:10.009760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.896 [2024-12-14 03:18:10.009774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.896 [2024-12-14 03:18:10.009780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.897 [2024-12-14 03:18:10.009787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.897 [2024-12-14 03:18:10.009802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.897 qpair failed and we were unable to recover it. 00:36:54.897 [2024-12-14 03:18:10.019671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.897 [2024-12-14 03:18:10.019725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.897 [2024-12-14 03:18:10.019739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.897 [2024-12-14 03:18:10.019746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.897 [2024-12-14 03:18:10.019753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:54.897 [2024-12-14 03:18:10.019769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:54.897 qpair failed and we were unable to recover it. 
00:36:55.156 [2024-12-14 03:18:10.029773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.157 [2024-12-14 03:18:10.029830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.157 [2024-12-14 03:18:10.029843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.157 [2024-12-14 03:18:10.029850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.157 [2024-12-14 03:18:10.029857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.157 [2024-12-14 03:18:10.029872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.157 qpair failed and we were unable to recover it. 00:36:55.157 [2024-12-14 03:18:10.039797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.157 [2024-12-14 03:18:10.039854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.157 [2024-12-14 03:18:10.039868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.157 [2024-12-14 03:18:10.039875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.157 [2024-12-14 03:18:10.039882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.157 [2024-12-14 03:18:10.039897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.157 qpair failed and we were unable to recover it. 00:36:55.157 [2024-12-14 03:18:10.049808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.157 [2024-12-14 03:18:10.049864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.157 [2024-12-14 03:18:10.049878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.157 [2024-12-14 03:18:10.049885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.157 [2024-12-14 03:18:10.049891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.157 [2024-12-14 03:18:10.049906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.157 qpair failed and we were unable to recover it. 
00:36:55.157 [2024-12-14 03:18:10.059784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.157 [2024-12-14 03:18:10.059859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.157 [2024-12-14 03:18:10.059876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.157 [2024-12-14 03:18:10.059883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.157 [2024-12-14 03:18:10.059890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.157 [2024-12-14 03:18:10.059905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.157 qpair failed and we were unable to recover it. 00:36:55.157 [2024-12-14 03:18:10.069875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.157 [2024-12-14 03:18:10.069928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.157 [2024-12-14 03:18:10.069941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.157 [2024-12-14 03:18:10.069947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.157 [2024-12-14 03:18:10.069953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.157 [2024-12-14 03:18:10.069969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.157 qpair failed and we were unable to recover it. 00:36:55.157 [2024-12-14 03:18:10.079937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.157 [2024-12-14 03:18:10.079990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.157 [2024-12-14 03:18:10.080003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.157 [2024-12-14 03:18:10.080009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.157 [2024-12-14 03:18:10.080017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.157 [2024-12-14 03:18:10.080032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.157 qpair failed and we were unable to recover it. 
00:36:55.157 [2024-12-14 03:18:10.089930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.157 [2024-12-14 03:18:10.090031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.157 [2024-12-14 03:18:10.090044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.157 [2024-12-14 03:18:10.090051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.157 [2024-12-14 03:18:10.090058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.157 [2024-12-14 03:18:10.090073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.157 qpair failed and we were unable to recover it. 00:36:55.157 [2024-12-14 03:18:10.099975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.157 [2024-12-14 03:18:10.100093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.157 [2024-12-14 03:18:10.100111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.157 [2024-12-14 03:18:10.100120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.157 [2024-12-14 03:18:10.100134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.157 [2024-12-14 03:18:10.100151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.157 qpair failed and we were unable to recover it. 00:36:55.157 [2024-12-14 03:18:10.109923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.157 [2024-12-14 03:18:10.109980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.157 [2024-12-14 03:18:10.109993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.157 [2024-12-14 03:18:10.109999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.157 [2024-12-14 03:18:10.110006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.157 [2024-12-14 03:18:10.110020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.157 qpair failed and we were unable to recover it. 
00:36:55.157 [2024-12-14 03:18:10.120027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.157 [2024-12-14 03:18:10.120083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.157 [2024-12-14 03:18:10.120096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.157 [2024-12-14 03:18:10.120103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.157 [2024-12-14 03:18:10.120109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.157 [2024-12-14 03:18:10.120125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.157 qpair failed and we were unable to recover it. 00:36:55.157 [2024-12-14 03:18:10.129973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.157 [2024-12-14 03:18:10.130053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.157 [2024-12-14 03:18:10.130066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.157 [2024-12-14 03:18:10.130073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.157 [2024-12-14 03:18:10.130079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.157 [2024-12-14 03:18:10.130094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.157 qpair failed and we were unable to recover it. 00:36:55.157 [2024-12-14 03:18:10.140088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.157 [2024-12-14 03:18:10.140176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.157 [2024-12-14 03:18:10.140189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.157 [2024-12-14 03:18:10.140196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.157 [2024-12-14 03:18:10.140202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.157 [2024-12-14 03:18:10.140217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.157 qpair failed and we were unable to recover it. 
00:36:55.157 [2024-12-14 03:18:10.150109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.157 [2024-12-14 03:18:10.150163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.157 [2024-12-14 03:18:10.150176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.157 [2024-12-14 03:18:10.150183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.157 [2024-12-14 03:18:10.150189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.157 [2024-12-14 03:18:10.150205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.157 qpair failed and we were unable to recover it. 00:36:55.157 [2024-12-14 03:18:10.160130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.158 [2024-12-14 03:18:10.160186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.158 [2024-12-14 03:18:10.160199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.158 [2024-12-14 03:18:10.160205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.158 [2024-12-14 03:18:10.160212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.158 [2024-12-14 03:18:10.160227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.158 qpair failed and we were unable to recover it. 00:36:55.158 [2024-12-14 03:18:10.170148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.158 [2024-12-14 03:18:10.170201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.158 [2024-12-14 03:18:10.170215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.158 [2024-12-14 03:18:10.170221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.158 [2024-12-14 03:18:10.170228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.158 [2024-12-14 03:18:10.170243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.158 qpair failed and we were unable to recover it. 
00:36:55.158 [2024-12-14 03:18:10.180164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.158 [2024-12-14 03:18:10.180215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.158 [2024-12-14 03:18:10.180228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.158 [2024-12-14 03:18:10.180235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.158 [2024-12-14 03:18:10.180241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.158 [2024-12-14 03:18:10.180256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.158 qpair failed and we were unable to recover it. 00:36:55.158 [2024-12-14 03:18:10.190225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.158 [2024-12-14 03:18:10.190290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.158 [2024-12-14 03:18:10.190307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.158 [2024-12-14 03:18:10.190325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.158 [2024-12-14 03:18:10.190331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.158 [2024-12-14 03:18:10.190347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.158 qpair failed and we were unable to recover it. 00:36:55.158 [2024-12-14 03:18:10.200172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.158 [2024-12-14 03:18:10.200228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.158 [2024-12-14 03:18:10.200240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.158 [2024-12-14 03:18:10.200247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.158 [2024-12-14 03:18:10.200253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.158 [2024-12-14 03:18:10.200268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.158 qpair failed and we were unable to recover it. 
00:36:55.158 [2024-12-14 03:18:10.210269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.158 [2024-12-14 03:18:10.210321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.158 [2024-12-14 03:18:10.210335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.158 [2024-12-14 03:18:10.210342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.158 [2024-12-14 03:18:10.210348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.158 [2024-12-14 03:18:10.210364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.158 qpair failed and we were unable to recover it. 00:36:55.158 [2024-12-14 03:18:10.220269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.158 [2024-12-14 03:18:10.220338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.158 [2024-12-14 03:18:10.220351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.158 [2024-12-14 03:18:10.220358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.158 [2024-12-14 03:18:10.220364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.158 [2024-12-14 03:18:10.220380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.158 qpair failed and we were unable to recover it. 00:36:55.158 [2024-12-14 03:18:10.230334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.158 [2024-12-14 03:18:10.230390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.158 [2024-12-14 03:18:10.230403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.158 [2024-12-14 03:18:10.230410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.158 [2024-12-14 03:18:10.230419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.158 [2024-12-14 03:18:10.230434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.158 qpair failed and we were unable to recover it. 
00:36:55.158 [2024-12-14 03:18:10.240347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.158 [2024-12-14 03:18:10.240407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.158 [2024-12-14 03:18:10.240420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.158 [2024-12-14 03:18:10.240427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.158 [2024-12-14 03:18:10.240433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.158 [2024-12-14 03:18:10.240448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.158 qpair failed and we were unable to recover it. 00:36:55.158 [2024-12-14 03:18:10.250371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.158 [2024-12-14 03:18:10.250429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.158 [2024-12-14 03:18:10.250442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.158 [2024-12-14 03:18:10.250449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.158 [2024-12-14 03:18:10.250455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.158 [2024-12-14 03:18:10.250471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.158 qpair failed and we were unable to recover it. 00:36:55.158 [2024-12-14 03:18:10.260430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.158 [2024-12-14 03:18:10.260483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.158 [2024-12-14 03:18:10.260495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.158 [2024-12-14 03:18:10.260502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.158 [2024-12-14 03:18:10.260508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.158 [2024-12-14 03:18:10.260523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.158 qpair failed and we were unable to recover it. 
00:36:55.158 [2024-12-14 03:18:10.270480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.158 [2024-12-14 03:18:10.270557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.158 [2024-12-14 03:18:10.270571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.158 [2024-12-14 03:18:10.270578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.158 [2024-12-14 03:18:10.270584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.158 [2024-12-14 03:18:10.270600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.158 qpair failed and we were unable to recover it. 00:36:55.158 [2024-12-14 03:18:10.280446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.158 [2024-12-14 03:18:10.280549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.158 [2024-12-14 03:18:10.280561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.158 [2024-12-14 03:18:10.280568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.158 [2024-12-14 03:18:10.280574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.158 [2024-12-14 03:18:10.280589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.158 qpair failed and we were unable to recover it. 00:36:55.418 [2024-12-14 03:18:10.290460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.418 [2024-12-14 03:18:10.290514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.418 [2024-12-14 03:18:10.290526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.418 [2024-12-14 03:18:10.290533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.418 [2024-12-14 03:18:10.290539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.418 [2024-12-14 03:18:10.290554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.418 qpair failed and we were unable to recover it. 
00:36:55.418 [2024-12-14 03:18:10.300550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.418 [2024-12-14 03:18:10.300614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.418 [2024-12-14 03:18:10.300627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.418 [2024-12-14 03:18:10.300635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.418 [2024-12-14 03:18:10.300641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.418 [2024-12-14 03:18:10.300655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.418 qpair failed and we were unable to recover it. 00:36:55.418 [2024-12-14 03:18:10.310546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.418 [2024-12-14 03:18:10.310600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.418 [2024-12-14 03:18:10.310613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.418 [2024-12-14 03:18:10.310620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.418 [2024-12-14 03:18:10.310627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.418 [2024-12-14 03:18:10.310641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.418 qpair failed and we were unable to recover it. 00:36:55.418 [2024-12-14 03:18:10.320575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.418 [2024-12-14 03:18:10.320639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.418 [2024-12-14 03:18:10.320655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.419 [2024-12-14 03:18:10.320662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.419 [2024-12-14 03:18:10.320668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.419 [2024-12-14 03:18:10.320683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.419 qpair failed and we were unable to recover it. 
00:36:55.419 [2024-12-14 03:18:10.330615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.419 [2024-12-14 03:18:10.330672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.419 [2024-12-14 03:18:10.330685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.419 [2024-12-14 03:18:10.330692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.419 [2024-12-14 03:18:10.330698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.419 [2024-12-14 03:18:10.330713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.419 qpair failed and we were unable to recover it. 00:36:55.419 [2024-12-14 03:18:10.340632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.419 [2024-12-14 03:18:10.340704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.419 [2024-12-14 03:18:10.340718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.419 [2024-12-14 03:18:10.340725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.419 [2024-12-14 03:18:10.340731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.419 [2024-12-14 03:18:10.340746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.419 qpair failed and we were unable to recover it. 00:36:55.419 [2024-12-14 03:18:10.350721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.419 [2024-12-14 03:18:10.350775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.419 [2024-12-14 03:18:10.350788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.419 [2024-12-14 03:18:10.350795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.419 [2024-12-14 03:18:10.350801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.419 [2024-12-14 03:18:10.350817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.419 qpair failed and we were unable to recover it. 
00:36:55.419 [2024-12-14 03:18:10.360743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.419 [2024-12-14 03:18:10.360805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.419 [2024-12-14 03:18:10.360817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.419 [2024-12-14 03:18:10.360828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.419 [2024-12-14 03:18:10.360834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.419 [2024-12-14 03:18:10.360850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.419 qpair failed and we were unable to recover it. 00:36:55.419 [2024-12-14 03:18:10.370759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.419 [2024-12-14 03:18:10.370824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.419 [2024-12-14 03:18:10.370837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.419 [2024-12-14 03:18:10.370844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.419 [2024-12-14 03:18:10.370850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.419 [2024-12-14 03:18:10.370865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.419 qpair failed and we were unable to recover it. 00:36:55.419 [2024-12-14 03:18:10.380752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.419 [2024-12-14 03:18:10.380817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.419 [2024-12-14 03:18:10.380830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.419 [2024-12-14 03:18:10.380837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.419 [2024-12-14 03:18:10.380843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.419 [2024-12-14 03:18:10.380859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.419 qpair failed and we were unable to recover it. 
00:36:55.419 [2024-12-14 03:18:10.390740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.419 [2024-12-14 03:18:10.390798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.419 [2024-12-14 03:18:10.390811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.419 [2024-12-14 03:18:10.390817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.419 [2024-12-14 03:18:10.390824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.419 [2024-12-14 03:18:10.390839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.419 qpair failed and we were unable to recover it. 00:36:55.419 [2024-12-14 03:18:10.400846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.419 [2024-12-14 03:18:10.400905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.419 [2024-12-14 03:18:10.400918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.419 [2024-12-14 03:18:10.400924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.419 [2024-12-14 03:18:10.400931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.419 [2024-12-14 03:18:10.400946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.419 qpair failed and we were unable to recover it. 00:36:55.419 [2024-12-14 03:18:10.410843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.419 [2024-12-14 03:18:10.410893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.419 [2024-12-14 03:18:10.410906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.419 [2024-12-14 03:18:10.410913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.419 [2024-12-14 03:18:10.410919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.419 [2024-12-14 03:18:10.410934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.419 qpair failed and we were unable to recover it. 
00:36:55.419 [2024-12-14 03:18:10.420824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.419 [2024-12-14 03:18:10.420913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.419 [2024-12-14 03:18:10.420926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.419 [2024-12-14 03:18:10.420933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.419 [2024-12-14 03:18:10.420939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.419 [2024-12-14 03:18:10.420954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.419 qpair failed and we were unable to recover it. 00:36:55.419 [2024-12-14 03:18:10.430932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.419 [2024-12-14 03:18:10.431032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.419 [2024-12-14 03:18:10.431045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.419 [2024-12-14 03:18:10.431052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.419 [2024-12-14 03:18:10.431058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.419 [2024-12-14 03:18:10.431073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.419 qpair failed and we were unable to recover it. 00:36:55.419 [2024-12-14 03:18:10.440925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.419 [2024-12-14 03:18:10.440981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.419 [2024-12-14 03:18:10.440993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.419 [2024-12-14 03:18:10.441000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.419 [2024-12-14 03:18:10.441006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.419 [2024-12-14 03:18:10.441020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.419 qpair failed and we were unable to recover it. 
00:36:55.419 [2024-12-14 03:18:10.450964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.419 [2024-12-14 03:18:10.451037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.419 [2024-12-14 03:18:10.451050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.419 [2024-12-14 03:18:10.451056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.419 [2024-12-14 03:18:10.451062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.420 [2024-12-14 03:18:10.451077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.420 qpair failed and we were unable to recover it. 00:36:55.420 [2024-12-14 03:18:10.460911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.420 [2024-12-14 03:18:10.460965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.420 [2024-12-14 03:18:10.460978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.420 [2024-12-14 03:18:10.460984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.420 [2024-12-14 03:18:10.460991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.420 [2024-12-14 03:18:10.461005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.420 qpair failed and we were unable to recover it. 00:36:55.420 [2024-12-14 03:18:10.471041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.420 [2024-12-14 03:18:10.471102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.420 [2024-12-14 03:18:10.471115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.420 [2024-12-14 03:18:10.471122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.420 [2024-12-14 03:18:10.471129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.420 [2024-12-14 03:18:10.471144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.420 qpair failed and we were unable to recover it. 
00:36:55.420 [2024-12-14 03:18:10.481057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.420 [2024-12-14 03:18:10.481116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.420 [2024-12-14 03:18:10.481128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.420 [2024-12-14 03:18:10.481136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.420 [2024-12-14 03:18:10.481142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.420 [2024-12-14 03:18:10.481158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.420 qpair failed and we were unable to recover it. 00:36:55.420 [2024-12-14 03:18:10.491098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.420 [2024-12-14 03:18:10.491156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.420 [2024-12-14 03:18:10.491169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.420 [2024-12-14 03:18:10.491179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.420 [2024-12-14 03:18:10.491186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.420 [2024-12-14 03:18:10.491201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.420 qpair failed and we were unable to recover it. 00:36:55.420 [2024-12-14 03:18:10.501117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.420 [2024-12-14 03:18:10.501174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.420 [2024-12-14 03:18:10.501189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.420 [2024-12-14 03:18:10.501196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.420 [2024-12-14 03:18:10.501203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.420 [2024-12-14 03:18:10.501219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.420 qpair failed and we were unable to recover it. 
00:36:55.420 [2024-12-14 03:18:10.511189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.420 [2024-12-14 03:18:10.511245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.420 [2024-12-14 03:18:10.511258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.420 [2024-12-14 03:18:10.511265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.420 [2024-12-14 03:18:10.511272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.420 [2024-12-14 03:18:10.511287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.420 qpair failed and we were unable to recover it. 00:36:55.420 [2024-12-14 03:18:10.521178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.420 [2024-12-14 03:18:10.521231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.420 [2024-12-14 03:18:10.521244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.420 [2024-12-14 03:18:10.521251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.420 [2024-12-14 03:18:10.521258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.420 [2024-12-14 03:18:10.521273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.420 qpair failed and we were unable to recover it. 00:36:55.420 [2024-12-14 03:18:10.531208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.420 [2024-12-14 03:18:10.531262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.420 [2024-12-14 03:18:10.531275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.420 [2024-12-14 03:18:10.531282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.420 [2024-12-14 03:18:10.531288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.420 [2024-12-14 03:18:10.531307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.420 qpair failed and we were unable to recover it. 
00:36:55.420 [2024-12-14 03:18:10.541231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.420 [2024-12-14 03:18:10.541280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.420 [2024-12-14 03:18:10.541293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.420 [2024-12-14 03:18:10.541300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.420 [2024-12-14 03:18:10.541307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.420 [2024-12-14 03:18:10.541328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.420 qpair failed and we were unable to recover it. 00:36:55.680 [2024-12-14 03:18:10.551321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.680 [2024-12-14 03:18:10.551378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.680 [2024-12-14 03:18:10.551391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.680 [2024-12-14 03:18:10.551398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.680 [2024-12-14 03:18:10.551404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.680 [2024-12-14 03:18:10.551420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.680 qpair failed and we were unable to recover it. 00:36:55.680 [2024-12-14 03:18:10.561297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.680 [2024-12-14 03:18:10.561357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.680 [2024-12-14 03:18:10.561370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.680 [2024-12-14 03:18:10.561377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.680 [2024-12-14 03:18:10.561383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.680 [2024-12-14 03:18:10.561399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.680 qpair failed and we were unable to recover it. 
00:36:55.680 [2024-12-14 03:18:10.571370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.680 [2024-12-14 03:18:10.571430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.680 [2024-12-14 03:18:10.571443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.680 [2024-12-14 03:18:10.571450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.680 [2024-12-14 03:18:10.571456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.681 [2024-12-14 03:18:10.571471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.681 qpair failed and we were unable to recover it. 00:36:55.681 [2024-12-14 03:18:10.581287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.681 [2024-12-14 03:18:10.581347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.681 [2024-12-14 03:18:10.581360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.681 [2024-12-14 03:18:10.581367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.681 [2024-12-14 03:18:10.581373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.681 [2024-12-14 03:18:10.581388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.681 qpair failed and we were unable to recover it. 00:36:55.681 [2024-12-14 03:18:10.591393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.681 [2024-12-14 03:18:10.591445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.681 [2024-12-14 03:18:10.591458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.681 [2024-12-14 03:18:10.591464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.681 [2024-12-14 03:18:10.591471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.681 [2024-12-14 03:18:10.591486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.681 qpair failed and we were unable to recover it. 
00:36:55.681 [2024-12-14 03:18:10.601336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.681 [2024-12-14 03:18:10.601423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.681 [2024-12-14 03:18:10.601436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.681 [2024-12-14 03:18:10.601443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.681 [2024-12-14 03:18:10.601449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.681 [2024-12-14 03:18:10.601464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.681 qpair failed and we were unable to recover it. 00:36:55.681 [2024-12-14 03:18:10.611436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.681 [2024-12-14 03:18:10.611487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.681 [2024-12-14 03:18:10.611500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.681 [2024-12-14 03:18:10.611507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.681 [2024-12-14 03:18:10.611513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.681 [2024-12-14 03:18:10.611528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.681 qpair failed and we were unable to recover it. 00:36:55.681 [2024-12-14 03:18:10.621442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.681 [2024-12-14 03:18:10.621497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.681 [2024-12-14 03:18:10.621512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.681 [2024-12-14 03:18:10.621520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.681 [2024-12-14 03:18:10.621526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.681 [2024-12-14 03:18:10.621541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.681 qpair failed and we were unable to recover it. 
00:36:55.681 [2024-12-14 03:18:10.631428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.681 [2024-12-14 03:18:10.631527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.681 [2024-12-14 03:18:10.631540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.681 [2024-12-14 03:18:10.631546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.681 [2024-12-14 03:18:10.631552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.681 [2024-12-14 03:18:10.631567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.681 qpair failed and we were unable to recover it. 00:36:55.681 [2024-12-14 03:18:10.641539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.681 [2024-12-14 03:18:10.641596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.681 [2024-12-14 03:18:10.641609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.681 [2024-12-14 03:18:10.641615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.681 [2024-12-14 03:18:10.641622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.681 [2024-12-14 03:18:10.641637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.681 qpair failed and we were unable to recover it. 00:36:55.681 [2024-12-14 03:18:10.651536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.681 [2024-12-14 03:18:10.651622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.681 [2024-12-14 03:18:10.651635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.681 [2024-12-14 03:18:10.651641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.681 [2024-12-14 03:18:10.651647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.681 [2024-12-14 03:18:10.651663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.681 qpair failed and we were unable to recover it. 
00:36:55.681 [2024-12-14 03:18:10.661483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.681 [2024-12-14 03:18:10.661551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.681 [2024-12-14 03:18:10.661565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.681 [2024-12-14 03:18:10.661574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.681 [2024-12-14 03:18:10.661584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.681 [2024-12-14 03:18:10.661601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.681 qpair failed and we were unable to recover it. 00:36:55.681 [2024-12-14 03:18:10.671591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.681 [2024-12-14 03:18:10.671646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.681 [2024-12-14 03:18:10.671659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.681 [2024-12-14 03:18:10.671666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.681 [2024-12-14 03:18:10.671672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.681 [2024-12-14 03:18:10.671687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.681 qpair failed and we were unable to recover it. 00:36:55.681 [2024-12-14 03:18:10.681619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.681 [2024-12-14 03:18:10.681671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.681 [2024-12-14 03:18:10.681684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.681 [2024-12-14 03:18:10.681691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.681 [2024-12-14 03:18:10.681697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.681 [2024-12-14 03:18:10.681713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.681 qpair failed and we were unable to recover it. 
00:36:55.681 [2024-12-14 03:18:10.691579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.681 [2024-12-14 03:18:10.691630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.681 [2024-12-14 03:18:10.691643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.681 [2024-12-14 03:18:10.691649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.681 [2024-12-14 03:18:10.691656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.681 [2024-12-14 03:18:10.691671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.681 qpair failed and we were unable to recover it. 00:36:55.681 [2024-12-14 03:18:10.701706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.681 [2024-12-14 03:18:10.701757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.681 [2024-12-14 03:18:10.701770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.681 [2024-12-14 03:18:10.701777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.681 [2024-12-14 03:18:10.701783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.681 [2024-12-14 03:18:10.701798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.681 qpair failed and we were unable to recover it. 00:36:55.681 [2024-12-14 03:18:10.711711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.682 [2024-12-14 03:18:10.711770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.682 [2024-12-14 03:18:10.711784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.682 [2024-12-14 03:18:10.711792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.682 [2024-12-14 03:18:10.711798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.682 [2024-12-14 03:18:10.711813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.682 qpair failed and we were unable to recover it. 
00:36:55.682 [2024-12-14 03:18:10.721672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.682 [2024-12-14 03:18:10.721730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.682 [2024-12-14 03:18:10.721743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.682 [2024-12-14 03:18:10.721750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.682 [2024-12-14 03:18:10.721756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.682 [2024-12-14 03:18:10.721772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.682 qpair failed and we were unable to recover it. 00:36:55.682 [2024-12-14 03:18:10.731777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.682 [2024-12-14 03:18:10.731858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.682 [2024-12-14 03:18:10.731871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.682 [2024-12-14 03:18:10.731878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.682 [2024-12-14 03:18:10.731884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.682 [2024-12-14 03:18:10.731899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.682 qpair failed and we were unable to recover it. 00:36:55.682 [2024-12-14 03:18:10.741789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.682 [2024-12-14 03:18:10.741849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.682 [2024-12-14 03:18:10.741862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.682 [2024-12-14 03:18:10.741869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.682 [2024-12-14 03:18:10.741876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.682 [2024-12-14 03:18:10.741890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.682 qpair failed and we were unable to recover it. 
00:36:55.682 [2024-12-14 03:18:10.751845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.682 [2024-12-14 03:18:10.751898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.682 [2024-12-14 03:18:10.751913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.682 [2024-12-14 03:18:10.751920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.682 [2024-12-14 03:18:10.751927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.682 [2024-12-14 03:18:10.751942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.682 qpair failed and we were unable to recover it. 00:36:55.682 [2024-12-14 03:18:10.761906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.682 [2024-12-14 03:18:10.762007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.682 [2024-12-14 03:18:10.762019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.682 [2024-12-14 03:18:10.762025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.682 [2024-12-14 03:18:10.762032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.682 [2024-12-14 03:18:10.762046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.682 qpair failed and we were unable to recover it. 00:36:55.682 [2024-12-14 03:18:10.771890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.682 [2024-12-14 03:18:10.771947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.682 [2024-12-14 03:18:10.771960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.682 [2024-12-14 03:18:10.771967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.682 [2024-12-14 03:18:10.771973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.682 [2024-12-14 03:18:10.771988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.682 qpair failed and we were unable to recover it. 
00:36:55.682 [2024-12-14 03:18:10.781887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.682 [2024-12-14 03:18:10.781941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.682 [2024-12-14 03:18:10.781954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.682 [2024-12-14 03:18:10.781960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.682 [2024-12-14 03:18:10.781967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.682 [2024-12-14 03:18:10.781981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.682 qpair failed and we were unable to recover it. 00:36:55.682 [2024-12-14 03:18:10.791918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.682 [2024-12-14 03:18:10.791988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.682 [2024-12-14 03:18:10.792001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.682 [2024-12-14 03:18:10.792008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.682 [2024-12-14 03:18:10.792019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.682 [2024-12-14 03:18:10.792034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.682 qpair failed and we were unable to recover it. 00:36:55.682 [2024-12-14 03:18:10.801933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.682 [2024-12-14 03:18:10.801995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.682 [2024-12-14 03:18:10.802008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.682 [2024-12-14 03:18:10.802015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.682 [2024-12-14 03:18:10.802021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.682 [2024-12-14 03:18:10.802036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.682 qpair failed and we were unable to recover it. 
00:36:55.943 [2024-12-14 03:18:10.812000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.943 [2024-12-14 03:18:10.812052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.943 [2024-12-14 03:18:10.812064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.943 [2024-12-14 03:18:10.812071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.943 [2024-12-14 03:18:10.812077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.943 [2024-12-14 03:18:10.812092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.943 qpair failed and we were unable to recover it. 00:36:55.943 [2024-12-14 03:18:10.822049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.943 [2024-12-14 03:18:10.822113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.943 [2024-12-14 03:18:10.822125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.943 [2024-12-14 03:18:10.822133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.943 [2024-12-14 03:18:10.822139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.943 [2024-12-14 03:18:10.822154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.943 qpair failed and we were unable to recover it. 00:36:55.943 [2024-12-14 03:18:10.832105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.943 [2024-12-14 03:18:10.832212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.943 [2024-12-14 03:18:10.832225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.943 [2024-12-14 03:18:10.832232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.943 [2024-12-14 03:18:10.832239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.943 [2024-12-14 03:18:10.832254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.943 qpair failed and we were unable to recover it. 
00:36:55.943 [2024-12-14 03:18:10.842083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.943 [2024-12-14 03:18:10.842149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.943 [2024-12-14 03:18:10.842162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.943 [2024-12-14 03:18:10.842168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.943 [2024-12-14 03:18:10.842175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.943 [2024-12-14 03:18:10.842190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.943 qpair failed and we were unable to recover it. 00:36:55.943 [2024-12-14 03:18:10.852119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.943 [2024-12-14 03:18:10.852174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.943 [2024-12-14 03:18:10.852186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.943 [2024-12-14 03:18:10.852193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.943 [2024-12-14 03:18:10.852199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.943 [2024-12-14 03:18:10.852214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.943 qpair failed and we were unable to recover it. 00:36:55.943 [2024-12-14 03:18:10.862133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.943 [2024-12-14 03:18:10.862183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.943 [2024-12-14 03:18:10.862197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.943 [2024-12-14 03:18:10.862203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.943 [2024-12-14 03:18:10.862210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.943 [2024-12-14 03:18:10.862225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.943 qpair failed and we were unable to recover it. 
00:36:55.943 [2024-12-14 03:18:10.872172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.943 [2024-12-14 03:18:10.872227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.943 [2024-12-14 03:18:10.872240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.943 [2024-12-14 03:18:10.872247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.943 [2024-12-14 03:18:10.872253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.943 [2024-12-14 03:18:10.872269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.943 qpair failed and we were unable to recover it. 00:36:55.943 [2024-12-14 03:18:10.882159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.943 [2024-12-14 03:18:10.882259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.943 [2024-12-14 03:18:10.882275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.943 [2024-12-14 03:18:10.882282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.943 [2024-12-14 03:18:10.882288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.943 [2024-12-14 03:18:10.882303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.943 qpair failed and we were unable to recover it. 00:36:55.943 [2024-12-14 03:18:10.892222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.943 [2024-12-14 03:18:10.892276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.943 [2024-12-14 03:18:10.892289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.943 [2024-12-14 03:18:10.892296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.943 [2024-12-14 03:18:10.892303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.943 [2024-12-14 03:18:10.892321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.943 qpair failed and we were unable to recover it. 
00:36:55.943 [2024-12-14 03:18:10.902250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.943 [2024-12-14 03:18:10.902341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.943 [2024-12-14 03:18:10.902354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.943 [2024-12-14 03:18:10.902361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.943 [2024-12-14 03:18:10.902367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.943 [2024-12-14 03:18:10.902382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.943 qpair failed and we were unable to recover it. 00:36:55.943 [2024-12-14 03:18:10.912285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.943 [2024-12-14 03:18:10.912344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.943 [2024-12-14 03:18:10.912357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.943 [2024-12-14 03:18:10.912364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.943 [2024-12-14 03:18:10.912371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.943 [2024-12-14 03:18:10.912386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.944 qpair failed and we were unable to recover it. 00:36:55.944 [2024-12-14 03:18:10.922345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.944 [2024-12-14 03:18:10.922400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.944 [2024-12-14 03:18:10.922412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.944 [2024-12-14 03:18:10.922422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.944 [2024-12-14 03:18:10.922429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.944 [2024-12-14 03:18:10.922443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.944 qpair failed and we were unable to recover it. 
00:36:55.944 [2024-12-14 03:18:10.932327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.944 [2024-12-14 03:18:10.932382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.944 [2024-12-14 03:18:10.932395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.944 [2024-12-14 03:18:10.932401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.944 [2024-12-14 03:18:10.932408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.944 [2024-12-14 03:18:10.932423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.944 qpair failed and we were unable to recover it. 00:36:55.944 [2024-12-14 03:18:10.942355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.944 [2024-12-14 03:18:10.942410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.944 [2024-12-14 03:18:10.942423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.944 [2024-12-14 03:18:10.942430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.944 [2024-12-14 03:18:10.942437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.944 [2024-12-14 03:18:10.942452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.944 qpair failed and we were unable to recover it. 00:36:55.944 [2024-12-14 03:18:10.952417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.944 [2024-12-14 03:18:10.952471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.944 [2024-12-14 03:18:10.952484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.944 [2024-12-14 03:18:10.952491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.944 [2024-12-14 03:18:10.952497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.944 [2024-12-14 03:18:10.952512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.944 qpair failed and we were unable to recover it. 
00:36:55.944 [2024-12-14 03:18:10.962462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.944 [2024-12-14 03:18:10.962553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.944 [2024-12-14 03:18:10.962566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.944 [2024-12-14 03:18:10.962573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.944 [2024-12-14 03:18:10.962579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.944 [2024-12-14 03:18:10.962597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.944 qpair failed and we were unable to recover it. 00:36:55.944 [2024-12-14 03:18:10.972449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.944 [2024-12-14 03:18:10.972504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.944 [2024-12-14 03:18:10.972516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.944 [2024-12-14 03:18:10.972523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.944 [2024-12-14 03:18:10.972530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.944 [2024-12-14 03:18:10.972545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.944 qpair failed and we were unable to recover it. 00:36:55.944 [2024-12-14 03:18:10.982470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.944 [2024-12-14 03:18:10.982526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.944 [2024-12-14 03:18:10.982539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.944 [2024-12-14 03:18:10.982545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.944 [2024-12-14 03:18:10.982552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.944 [2024-12-14 03:18:10.982567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.944 qpair failed and we were unable to recover it. 
00:36:55.944 [2024-12-14 03:18:10.992554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.944 [2024-12-14 03:18:10.992659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.944 [2024-12-14 03:18:10.992673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.944 [2024-12-14 03:18:10.992680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.944 [2024-12-14 03:18:10.992687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.944 [2024-12-14 03:18:10.992703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.944 qpair failed and we were unable to recover it. 00:36:55.944 [2024-12-14 03:18:11.002583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.944 [2024-12-14 03:18:11.002660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.944 [2024-12-14 03:18:11.002673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.944 [2024-12-14 03:18:11.002680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.944 [2024-12-14 03:18:11.002686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.944 [2024-12-14 03:18:11.002701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.944 qpair failed and we were unable to recover it. 00:36:55.944 [2024-12-14 03:18:11.012567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.944 [2024-12-14 03:18:11.012623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.944 [2024-12-14 03:18:11.012636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.944 [2024-12-14 03:18:11.012643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.944 [2024-12-14 03:18:11.012650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.944 [2024-12-14 03:18:11.012665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.944 qpair failed and we were unable to recover it. 
00:36:55.944 [2024-12-14 03:18:11.022638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.944 [2024-12-14 03:18:11.022695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.944 [2024-12-14 03:18:11.022708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.944 [2024-12-14 03:18:11.022715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.944 [2024-12-14 03:18:11.022722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.944 [2024-12-14 03:18:11.022737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.944 qpair failed and we were unable to recover it. 00:36:55.944 [2024-12-14 03:18:11.032583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.944 [2024-12-14 03:18:11.032677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.944 [2024-12-14 03:18:11.032690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.944 [2024-12-14 03:18:11.032697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.944 [2024-12-14 03:18:11.032703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.944 [2024-12-14 03:18:11.032718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.944 qpair failed and we were unable to recover it. 00:36:55.944 [2024-12-14 03:18:11.042668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.944 [2024-12-14 03:18:11.042724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.944 [2024-12-14 03:18:11.042737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.944 [2024-12-14 03:18:11.042744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.944 [2024-12-14 03:18:11.042749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.944 [2024-12-14 03:18:11.042764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.944 qpair failed and we were unable to recover it. 
00:36:55.944 [2024-12-14 03:18:11.052645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.944 [2024-12-14 03:18:11.052698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.945 [2024-12-14 03:18:11.052711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.945 [2024-12-14 03:18:11.052721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.945 [2024-12-14 03:18:11.052727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.945 [2024-12-14 03:18:11.052742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.945 qpair failed and we were unable to recover it. 00:36:55.945 [2024-12-14 03:18:11.062722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.945 [2024-12-14 03:18:11.062774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.945 [2024-12-14 03:18:11.062786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.945 [2024-12-14 03:18:11.062793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.945 [2024-12-14 03:18:11.062800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.945 [2024-12-14 03:18:11.062814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.945 qpair failed and we were unable to recover it. 00:36:55.945 [2024-12-14 03:18:11.072757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.945 [2024-12-14 03:18:11.072818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.945 [2024-12-14 03:18:11.072831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.945 [2024-12-14 03:18:11.072839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.945 [2024-12-14 03:18:11.072845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:55.945 [2024-12-14 03:18:11.072860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:55.945 qpair failed and we were unable to recover it. 
00:36:56.205 [2024-12-14 03:18:11.082793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.082846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.205 [2024-12-14 03:18:11.082859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.205 [2024-12-14 03:18:11.082865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.205 [2024-12-14 03:18:11.082872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.205 [2024-12-14 03:18:11.082888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.205 qpair failed and we were unable to recover it. 00:36:56.205 [2024-12-14 03:18:11.092815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.092875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.205 [2024-12-14 03:18:11.092888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.205 [2024-12-14 03:18:11.092895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.205 [2024-12-14 03:18:11.092901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.205 [2024-12-14 03:18:11.092919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.205 qpair failed and we were unable to recover it. 00:36:56.205 [2024-12-14 03:18:11.102842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.102921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.205 [2024-12-14 03:18:11.102934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.205 [2024-12-14 03:18:11.102941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.205 [2024-12-14 03:18:11.102947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.205 [2024-12-14 03:18:11.102962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.205 qpair failed and we were unable to recover it. 
00:36:56.205 [2024-12-14 03:18:11.112808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.112869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.205 [2024-12-14 03:18:11.112882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.205 [2024-12-14 03:18:11.112889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.205 [2024-12-14 03:18:11.112896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.205 [2024-12-14 03:18:11.112911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.205 qpair failed and we were unable to recover it. 00:36:56.205 [2024-12-14 03:18:11.122904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.122961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.205 [2024-12-14 03:18:11.122974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.205 [2024-12-14 03:18:11.122982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.205 [2024-12-14 03:18:11.122988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.205 [2024-12-14 03:18:11.123002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.205 qpair failed and we were unable to recover it. 00:36:56.205 [2024-12-14 03:18:11.132932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.133001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.205 [2024-12-14 03:18:11.133015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.205 [2024-12-14 03:18:11.133022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.205 [2024-12-14 03:18:11.133028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.205 [2024-12-14 03:18:11.133042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.205 qpair failed and we were unable to recover it. 
00:36:56.205 [2024-12-14 03:18:11.142952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.143006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.205 [2024-12-14 03:18:11.143019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.205 [2024-12-14 03:18:11.143026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.205 [2024-12-14 03:18:11.143032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.205 [2024-12-14 03:18:11.143047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.205 qpair failed and we were unable to recover it. 00:36:56.205 [2024-12-14 03:18:11.152990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.153044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.205 [2024-12-14 03:18:11.153056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.205 [2024-12-14 03:18:11.153063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.205 [2024-12-14 03:18:11.153069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.205 [2024-12-14 03:18:11.153084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.205 qpair failed and we were unable to recover it. 00:36:56.205 [2024-12-14 03:18:11.162966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.163020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.205 [2024-12-14 03:18:11.163032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.205 [2024-12-14 03:18:11.163039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.205 [2024-12-14 03:18:11.163045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.205 [2024-12-14 03:18:11.163060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.205 qpair failed and we were unable to recover it. 
00:36:56.205 [2024-12-14 03:18:11.173093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.173157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.205 [2024-12-14 03:18:11.173171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.205 [2024-12-14 03:18:11.173178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.205 [2024-12-14 03:18:11.173185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.205 [2024-12-14 03:18:11.173200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.205 qpair failed and we were unable to recover it. 00:36:56.205 [2024-12-14 03:18:11.183069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.183123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.205 [2024-12-14 03:18:11.183140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.205 [2024-12-14 03:18:11.183147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.205 [2024-12-14 03:18:11.183153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.205 [2024-12-14 03:18:11.183168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.205 qpair failed and we were unable to recover it. 00:36:56.205 [2024-12-14 03:18:11.193109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.193166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.205 [2024-12-14 03:18:11.193179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.205 [2024-12-14 03:18:11.193185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.205 [2024-12-14 03:18:11.193192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.205 [2024-12-14 03:18:11.193207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.205 qpair failed and we were unable to recover it. 
00:36:56.205 [2024-12-14 03:18:11.203152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.203207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.205 [2024-12-14 03:18:11.203220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.205 [2024-12-14 03:18:11.203227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.205 [2024-12-14 03:18:11.203233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.205 [2024-12-14 03:18:11.203249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.205 qpair failed and we were unable to recover it. 00:36:56.205 [2024-12-14 03:18:11.213156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.213213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.205 [2024-12-14 03:18:11.213226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.205 [2024-12-14 03:18:11.213234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.205 [2024-12-14 03:18:11.213240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.205 [2024-12-14 03:18:11.213255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.205 qpair failed and we were unable to recover it. 00:36:56.205 [2024-12-14 03:18:11.223170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.205 [2024-12-14 03:18:11.223223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.206 [2024-12-14 03:18:11.223236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.206 [2024-12-14 03:18:11.223243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.206 [2024-12-14 03:18:11.223253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.206 [2024-12-14 03:18:11.223268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.206 qpair failed and we were unable to recover it. 
00:36:56.206 [2024-12-14 03:18:11.233208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.206 [2024-12-14 03:18:11.233279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.206 [2024-12-14 03:18:11.233292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.206 [2024-12-14 03:18:11.233300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.206 [2024-12-14 03:18:11.233306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.206 [2024-12-14 03:18:11.233325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.206 qpair failed and we were unable to recover it. 00:36:56.206 [2024-12-14 03:18:11.243223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.206 [2024-12-14 03:18:11.243282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.206 [2024-12-14 03:18:11.243296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.206 [2024-12-14 03:18:11.243303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.206 [2024-12-14 03:18:11.243309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.206 [2024-12-14 03:18:11.243328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.206 qpair failed and we were unable to recover it. 00:36:56.206 [2024-12-14 03:18:11.253307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.206 [2024-12-14 03:18:11.253368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.206 [2024-12-14 03:18:11.253382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.206 [2024-12-14 03:18:11.253388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.206 [2024-12-14 03:18:11.253395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.206 [2024-12-14 03:18:11.253409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.206 qpair failed and we were unable to recover it. 
00:36:56.206 [2024-12-14 03:18:11.263360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.206 [2024-12-14 03:18:11.263418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.206 [2024-12-14 03:18:11.263431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.206 [2024-12-14 03:18:11.263439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.206 [2024-12-14 03:18:11.263445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.206 [2024-12-14 03:18:11.263460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.206 qpair failed and we were unable to recover it. 00:36:56.206 [2024-12-14 03:18:11.273238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.206 [2024-12-14 03:18:11.273294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.206 [2024-12-14 03:18:11.273307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.206 [2024-12-14 03:18:11.273319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.206 [2024-12-14 03:18:11.273325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.206 [2024-12-14 03:18:11.273341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.206 qpair failed and we were unable to recover it. 00:36:56.206 [2024-12-14 03:18:11.283350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.206 [2024-12-14 03:18:11.283408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.206 [2024-12-14 03:18:11.283421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.206 [2024-12-14 03:18:11.283428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.206 [2024-12-14 03:18:11.283434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.206 [2024-12-14 03:18:11.283450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.206 qpair failed and we were unable to recover it. 
00:36:56.206 [2024-12-14 03:18:11.293419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.206 [2024-12-14 03:18:11.293476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.206 [2024-12-14 03:18:11.293489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.206 [2024-12-14 03:18:11.293496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.206 [2024-12-14 03:18:11.293503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.206 [2024-12-14 03:18:11.293518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.206 qpair failed and we were unable to recover it. 00:36:56.206 [2024-12-14 03:18:11.303444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.206 [2024-12-14 03:18:11.303501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.206 [2024-12-14 03:18:11.303513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.206 [2024-12-14 03:18:11.303520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.206 [2024-12-14 03:18:11.303527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.206 [2024-12-14 03:18:11.303542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.206 qpair failed and we were unable to recover it. 00:36:56.206 [2024-12-14 03:18:11.313425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.206 [2024-12-14 03:18:11.313484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.206 [2024-12-14 03:18:11.313500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.206 [2024-12-14 03:18:11.313507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.206 [2024-12-14 03:18:11.313513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.206 [2024-12-14 03:18:11.313528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.206 qpair failed and we were unable to recover it. 
00:36:56.206 [2024-12-14 03:18:11.323449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.206 [2024-12-14 03:18:11.323506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.206 [2024-12-14 03:18:11.323519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.206 [2024-12-14 03:18:11.323525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.206 [2024-12-14 03:18:11.323532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.206 [2024-12-14 03:18:11.323547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.206 qpair failed and we were unable to recover it. 00:36:56.206 [2024-12-14 03:18:11.333466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.206 [2024-12-14 03:18:11.333561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.206 [2024-12-14 03:18:11.333575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.206 [2024-12-14 03:18:11.333581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.206 [2024-12-14 03:18:11.333588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.206 [2024-12-14 03:18:11.333603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.206 qpair failed and we were unable to recover it. 00:36:56.466 [2024-12-14 03:18:11.343519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.466 [2024-12-14 03:18:11.343572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.466 [2024-12-14 03:18:11.343585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.466 [2024-12-14 03:18:11.343591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.466 [2024-12-14 03:18:11.343598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.467 [2024-12-14 03:18:11.343613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.467 qpair failed and we were unable to recover it. 
00:36:56.467 [2024-12-14 03:18:11.353506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.467 [2024-12-14 03:18:11.353599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.467 [2024-12-14 03:18:11.353611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.467 [2024-12-14 03:18:11.353618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.467 [2024-12-14 03:18:11.353627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.467 [2024-12-14 03:18:11.353642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.467 qpair failed and we were unable to recover it. 00:36:56.467 [2024-12-14 03:18:11.363600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.467 [2024-12-14 03:18:11.363663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.467 [2024-12-14 03:18:11.363676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.467 [2024-12-14 03:18:11.363683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.467 [2024-12-14 03:18:11.363689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.467 [2024-12-14 03:18:11.363704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.467 qpair failed and we were unable to recover it. 00:36:56.467 [2024-12-14 03:18:11.373611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.467 [2024-12-14 03:18:11.373669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.467 [2024-12-14 03:18:11.373682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.467 [2024-12-14 03:18:11.373689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.467 [2024-12-14 03:18:11.373695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.467 [2024-12-14 03:18:11.373710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.467 qpair failed and we were unable to recover it. 
00:36:56.467 [2024-12-14 03:18:11.383627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.467 [2024-12-14 03:18:11.383694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.467 [2024-12-14 03:18:11.383708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.467 [2024-12-14 03:18:11.383715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.467 [2024-12-14 03:18:11.383722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.467 [2024-12-14 03:18:11.383737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.467 qpair failed and we were unable to recover it. 00:36:56.467 [2024-12-14 03:18:11.393714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.467 [2024-12-14 03:18:11.393784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.467 [2024-12-14 03:18:11.393797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.467 [2024-12-14 03:18:11.393803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.467 [2024-12-14 03:18:11.393810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.467 [2024-12-14 03:18:11.393824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.467 qpair failed and we were unable to recover it. 00:36:56.467 [2024-12-14 03:18:11.403693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.467 [2024-12-14 03:18:11.403743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.467 [2024-12-14 03:18:11.403756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.467 [2024-12-14 03:18:11.403763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.467 [2024-12-14 03:18:11.403769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.467 [2024-12-14 03:18:11.403784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.467 qpair failed and we were unable to recover it. 
00:36:56.467 [2024-12-14 03:18:11.413775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.467 [2024-12-14 03:18:11.413878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.467 [2024-12-14 03:18:11.413890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.467 [2024-12-14 03:18:11.413897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.467 [2024-12-14 03:18:11.413903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.467 [2024-12-14 03:18:11.413918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.467 qpair failed and we were unable to recover it. 00:36:56.467 [2024-12-14 03:18:11.423744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.467 [2024-12-14 03:18:11.423798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.467 [2024-12-14 03:18:11.423811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.467 [2024-12-14 03:18:11.423818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.467 [2024-12-14 03:18:11.423824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.467 [2024-12-14 03:18:11.423839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.467 qpair failed and we were unable to recover it. 00:36:56.467 [2024-12-14 03:18:11.433778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.467 [2024-12-14 03:18:11.433834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.467 [2024-12-14 03:18:11.433846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.467 [2024-12-14 03:18:11.433853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.467 [2024-12-14 03:18:11.433859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.467 [2024-12-14 03:18:11.433875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.467 qpair failed and we were unable to recover it. 
00:36:56.467 [2024-12-14 03:18:11.443810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.467 [2024-12-14 03:18:11.443864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.467 [2024-12-14 03:18:11.443880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.467 [2024-12-14 03:18:11.443888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.467 [2024-12-14 03:18:11.443894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.467 [2024-12-14 03:18:11.443909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.467 qpair failed and we were unable to recover it. 00:36:56.467 [2024-12-14 03:18:11.453839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.467 [2024-12-14 03:18:11.453892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.467 [2024-12-14 03:18:11.453905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.467 [2024-12-14 03:18:11.453911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.467 [2024-12-14 03:18:11.453917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.467 [2024-12-14 03:18:11.453933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.467 qpair failed and we were unable to recover it. 00:36:56.467 [2024-12-14 03:18:11.463879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.467 [2024-12-14 03:18:11.463935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.467 [2024-12-14 03:18:11.463947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.467 [2024-12-14 03:18:11.463954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.467 [2024-12-14 03:18:11.463960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.467 [2024-12-14 03:18:11.463975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.467 qpair failed and we were unable to recover it. 
00:36:56.467 [2024-12-14 03:18:11.473943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.467 [2024-12-14 03:18:11.474047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.467 [2024-12-14 03:18:11.474060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.467 [2024-12-14 03:18:11.474067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.467 [2024-12-14 03:18:11.474073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.467 [2024-12-14 03:18:11.474087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.467 qpair failed and we were unable to recover it. 00:36:56.467 [2024-12-14 03:18:11.483916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.468 [2024-12-14 03:18:11.483968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.468 [2024-12-14 03:18:11.483981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.468 [2024-12-14 03:18:11.483993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.468 [2024-12-14 03:18:11.483999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.468 [2024-12-14 03:18:11.484015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.468 qpair failed and we were unable to recover it. 00:36:56.468 [2024-12-14 03:18:11.493959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.468 [2024-12-14 03:18:11.494014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.468 [2024-12-14 03:18:11.494028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.468 [2024-12-14 03:18:11.494035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.468 [2024-12-14 03:18:11.494042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.468 [2024-12-14 03:18:11.494057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.468 qpair failed and we were unable to recover it. 
00:36:56.468 [2024-12-14 03:18:11.503977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.468 [2024-12-14 03:18:11.504031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.468 [2024-12-14 03:18:11.504044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.468 [2024-12-14 03:18:11.504051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.468 [2024-12-14 03:18:11.504057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.468 [2024-12-14 03:18:11.504072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.468 qpair failed and we were unable to recover it. 00:36:56.468 [2024-12-14 03:18:11.514030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.468 [2024-12-14 03:18:11.514087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.468 [2024-12-14 03:18:11.514100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.468 [2024-12-14 03:18:11.514107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.468 [2024-12-14 03:18:11.514113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.468 [2024-12-14 03:18:11.514128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.468 qpair failed and we were unable to recover it. 00:36:56.468 [2024-12-14 03:18:11.524039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.468 [2024-12-14 03:18:11.524092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.468 [2024-12-14 03:18:11.524105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.468 [2024-12-14 03:18:11.524112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.468 [2024-12-14 03:18:11.524118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.468 [2024-12-14 03:18:11.524137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.468 qpair failed and we were unable to recover it. 
00:36:56.468 [2024-12-14 03:18:11.534127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.468 [2024-12-14 03:18:11.534180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.468 [2024-12-14 03:18:11.534193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.468 [2024-12-14 03:18:11.534199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.468 [2024-12-14 03:18:11.534205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.468 [2024-12-14 03:18:11.534221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.468 qpair failed and we were unable to recover it. 00:36:56.468 [2024-12-14 03:18:11.544105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.468 [2024-12-14 03:18:11.544157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.468 [2024-12-14 03:18:11.544170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.468 [2024-12-14 03:18:11.544177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.468 [2024-12-14 03:18:11.544183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.468 [2024-12-14 03:18:11.544198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.468 qpair failed and we were unable to recover it. 00:36:56.468 [2024-12-14 03:18:11.554122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.468 [2024-12-14 03:18:11.554176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.468 [2024-12-14 03:18:11.554189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.468 [2024-12-14 03:18:11.554196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.468 [2024-12-14 03:18:11.554202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.468 [2024-12-14 03:18:11.554218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.468 qpair failed and we were unable to recover it. 
00:36:56.468 [2024-12-14 03:18:11.564144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.468 [2024-12-14 03:18:11.564198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.468 [2024-12-14 03:18:11.564210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.468 [2024-12-14 03:18:11.564217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.468 [2024-12-14 03:18:11.564223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.468 [2024-12-14 03:18:11.564239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.468 qpair failed and we were unable to recover it. 00:36:56.468 [2024-12-14 03:18:11.574209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.468 [2024-12-14 03:18:11.574307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.468 [2024-12-14 03:18:11.574324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.468 [2024-12-14 03:18:11.574331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.468 [2024-12-14 03:18:11.574338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.468 [2024-12-14 03:18:11.574353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.468 qpair failed and we were unable to recover it. 00:36:56.468 [2024-12-14 03:18:11.584237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.468 [2024-12-14 03:18:11.584345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.468 [2024-12-14 03:18:11.584358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.468 [2024-12-14 03:18:11.584365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.468 [2024-12-14 03:18:11.584371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.468 [2024-12-14 03:18:11.584387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.468 qpair failed and we were unable to recover it. 
00:36:56.468 [2024-12-14 03:18:11.594241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.468 [2024-12-14 03:18:11.594297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.468 [2024-12-14 03:18:11.594311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.468 [2024-12-14 03:18:11.594322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.468 [2024-12-14 03:18:11.594329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.468 [2024-12-14 03:18:11.594345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.468 qpair failed and we were unable to recover it. 00:36:56.729 [2024-12-14 03:18:11.604293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.729 [2024-12-14 03:18:11.604352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.729 [2024-12-14 03:18:11.604365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.729 [2024-12-14 03:18:11.604373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.729 [2024-12-14 03:18:11.604379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.729 [2024-12-14 03:18:11.604395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.729 qpair failed and we were unable to recover it. 00:36:56.729 [2024-12-14 03:18:11.614282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.729 [2024-12-14 03:18:11.614366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.729 [2024-12-14 03:18:11.614379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.729 [2024-12-14 03:18:11.614389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.729 [2024-12-14 03:18:11.614396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.729 [2024-12-14 03:18:11.614411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.729 qpair failed and we were unable to recover it. 
00:36:56.729 [2024-12-14 03:18:11.624316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.729 [2024-12-14 03:18:11.624376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.729 [2024-12-14 03:18:11.624389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.729 [2024-12-14 03:18:11.624396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.729 [2024-12-14 03:18:11.624402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.729 [2024-12-14 03:18:11.624417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.729 qpair failed and we were unable to recover it. 00:36:56.729 [2024-12-14 03:18:11.634348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.729 [2024-12-14 03:18:11.634417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.729 [2024-12-14 03:18:11.634430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.729 [2024-12-14 03:18:11.634437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.729 [2024-12-14 03:18:11.634443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.729 [2024-12-14 03:18:11.634459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.729 qpair failed and we were unable to recover it. 00:36:56.729 [2024-12-14 03:18:11.644376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.729 [2024-12-14 03:18:11.644432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.729 [2024-12-14 03:18:11.644445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.729 [2024-12-14 03:18:11.644452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.729 [2024-12-14 03:18:11.644459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.729 [2024-12-14 03:18:11.644474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.729 qpair failed and we were unable to recover it. 
00:36:56.729 [2024-12-14 03:18:11.654394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.729 [2024-12-14 03:18:11.654447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.729 [2024-12-14 03:18:11.654459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.729 [2024-12-14 03:18:11.654466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.729 [2024-12-14 03:18:11.654473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.729 [2024-12-14 03:18:11.654492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.729 qpair failed and we were unable to recover it. 00:36:56.729 [2024-12-14 03:18:11.664351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.729 [2024-12-14 03:18:11.664405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.729 [2024-12-14 03:18:11.664417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.729 [2024-12-14 03:18:11.664424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.729 [2024-12-14 03:18:11.664430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.729 [2024-12-14 03:18:11.664446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.729 qpair failed and we were unable to recover it. 00:36:56.729 [2024-12-14 03:18:11.674478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.729 [2024-12-14 03:18:11.674572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.729 [2024-12-14 03:18:11.674585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.729 [2024-12-14 03:18:11.674592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.729 [2024-12-14 03:18:11.674598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.729 [2024-12-14 03:18:11.674613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.729 qpair failed and we were unable to recover it. 
00:36:56.729 [2024-12-14 03:18:11.684550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.729 [2024-12-14 03:18:11.684637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.729 [2024-12-14 03:18:11.684649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.729 [2024-12-14 03:18:11.684656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.729 [2024-12-14 03:18:11.684662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.729 [2024-12-14 03:18:11.684677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.729 qpair failed and we were unable to recover it. 00:36:56.730 [2024-12-14 03:18:11.694516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.730 [2024-12-14 03:18:11.694607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.730 [2024-12-14 03:18:11.694620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.730 [2024-12-14 03:18:11.694627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.730 [2024-12-14 03:18:11.694633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.730 [2024-12-14 03:18:11.694648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.730 qpair failed and we were unable to recover it. 00:36:56.730 [2024-12-14 03:18:11.704588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.730 [2024-12-14 03:18:11.704791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.730 [2024-12-14 03:18:11.704807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.730 [2024-12-14 03:18:11.704814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.730 [2024-12-14 03:18:11.704821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.730 [2024-12-14 03:18:11.704837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.730 qpair failed and we were unable to recover it. 
00:36:56.730 [2024-12-14 03:18:11.714577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.730 [2024-12-14 03:18:11.714634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.730 [2024-12-14 03:18:11.714647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.730 [2024-12-14 03:18:11.714653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.730 [2024-12-14 03:18:11.714660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.730 [2024-12-14 03:18:11.714675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.730 qpair failed and we were unable to recover it. 00:36:56.730 [2024-12-14 03:18:11.724593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.730 [2024-12-14 03:18:11.724654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.730 [2024-12-14 03:18:11.724667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.730 [2024-12-14 03:18:11.724674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.730 [2024-12-14 03:18:11.724680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.730 [2024-12-14 03:18:11.724695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.730 qpair failed and we were unable to recover it. 00:36:56.730 [2024-12-14 03:18:11.734619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.730 [2024-12-14 03:18:11.734674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.730 [2024-12-14 03:18:11.734687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.730 [2024-12-14 03:18:11.734694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.730 [2024-12-14 03:18:11.734701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.730 [2024-12-14 03:18:11.734716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.730 qpair failed and we were unable to recover it. 
00:36:56.730 [2024-12-14 03:18:11.744693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.730 [2024-12-14 03:18:11.744747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.730 [2024-12-14 03:18:11.744763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.730 [2024-12-14 03:18:11.744771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.730 [2024-12-14 03:18:11.744777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.730 [2024-12-14 03:18:11.744792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.730 qpair failed and we were unable to recover it. 00:36:56.730 [2024-12-14 03:18:11.754737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.730 [2024-12-14 03:18:11.754843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.730 [2024-12-14 03:18:11.754857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.730 [2024-12-14 03:18:11.754863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.730 [2024-12-14 03:18:11.754869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.730 [2024-12-14 03:18:11.754885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.730 qpair failed and we were unable to recover it. 00:36:56.730 [2024-12-14 03:18:11.764707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.730 [2024-12-14 03:18:11.764762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.730 [2024-12-14 03:18:11.764775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.730 [2024-12-14 03:18:11.764781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.730 [2024-12-14 03:18:11.764788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.730 [2024-12-14 03:18:11.764803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.730 qpair failed and we were unable to recover it. 
00:36:56.730 [2024-12-14 03:18:11.774730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.730 [2024-12-14 03:18:11.774828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.730 [2024-12-14 03:18:11.774841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.730 [2024-12-14 03:18:11.774847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.730 [2024-12-14 03:18:11.774853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.730 [2024-12-14 03:18:11.774869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.730 qpair failed and we were unable to recover it. 00:36:56.730 [2024-12-14 03:18:11.784758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.730 [2024-12-14 03:18:11.784812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.730 [2024-12-14 03:18:11.784825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.730 [2024-12-14 03:18:11.784831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.730 [2024-12-14 03:18:11.784841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.730 [2024-12-14 03:18:11.784856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.730 qpair failed and we were unable to recover it. 00:36:56.730 [2024-12-14 03:18:11.794816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.730 [2024-12-14 03:18:11.794878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.730 [2024-12-14 03:18:11.794891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.730 [2024-12-14 03:18:11.794898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.730 [2024-12-14 03:18:11.794904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.730 [2024-12-14 03:18:11.794919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.730 qpair failed and we were unable to recover it. 
00:36:56.730 [2024-12-14 03:18:11.804839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.730 [2024-12-14 03:18:11.804896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.730 [2024-12-14 03:18:11.804909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.730 [2024-12-14 03:18:11.804916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.730 [2024-12-14 03:18:11.804922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.730 [2024-12-14 03:18:11.804937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.730 qpair failed and we were unable to recover it. 00:36:56.730 [2024-12-14 03:18:11.814851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.730 [2024-12-14 03:18:11.814957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.730 [2024-12-14 03:18:11.814971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.730 [2024-12-14 03:18:11.814978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.730 [2024-12-14 03:18:11.814984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.730 [2024-12-14 03:18:11.814999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.730 qpair failed and we were unable to recover it. 00:36:56.730 [2024-12-14 03:18:11.824884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.730 [2024-12-14 03:18:11.824950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.731 [2024-12-14 03:18:11.824962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.731 [2024-12-14 03:18:11.824969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.731 [2024-12-14 03:18:11.824976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.731 [2024-12-14 03:18:11.824991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.731 qpair failed and we were unable to recover it. 
00:36:56.731 [2024-12-14 03:18:11.834930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.731 [2024-12-14 03:18:11.834985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.731 [2024-12-14 03:18:11.834998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.731 [2024-12-14 03:18:11.835005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.731 [2024-12-14 03:18:11.835011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.731 [2024-12-14 03:18:11.835026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.731 qpair failed and we were unable to recover it. 00:36:56.731 [2024-12-14 03:18:11.844959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.731 [2024-12-14 03:18:11.845012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.731 [2024-12-14 03:18:11.845024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.731 [2024-12-14 03:18:11.845031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.731 [2024-12-14 03:18:11.845037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.731 [2024-12-14 03:18:11.845052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.731 qpair failed and we were unable to recover it. 00:36:56.731 [2024-12-14 03:18:11.854926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.731 [2024-12-14 03:18:11.854976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.731 [2024-12-14 03:18:11.854989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.731 [2024-12-14 03:18:11.854996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.731 [2024-12-14 03:18:11.855003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.731 [2024-12-14 03:18:11.855019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.731 qpair failed and we were unable to recover it. 
00:36:56.991 [2024-12-14 03:18:11.864965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.991 [2024-12-14 03:18:11.865019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.991 [2024-12-14 03:18:11.865032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.991 [2024-12-14 03:18:11.865038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.991 [2024-12-14 03:18:11.865045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.991 [2024-12-14 03:18:11.865060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.991 qpair failed and we were unable to recover it. 00:36:56.991 [2024-12-14 03:18:11.875001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.991 [2024-12-14 03:18:11.875057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.991 [2024-12-14 03:18:11.875074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.991 [2024-12-14 03:18:11.875081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.991 [2024-12-14 03:18:11.875087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.991 [2024-12-14 03:18:11.875102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.991 qpair failed and we were unable to recover it. 00:36:56.991 [2024-12-14 03:18:11.885040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.991 [2024-12-14 03:18:11.885108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.991 [2024-12-14 03:18:11.885122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.991 [2024-12-14 03:18:11.885129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.991 [2024-12-14 03:18:11.885135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.991 [2024-12-14 03:18:11.885151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.991 qpair failed and we were unable to recover it. 
00:36:56.991 [2024-12-14 03:18:11.894982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.991 [2024-12-14 03:18:11.895038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.991 [2024-12-14 03:18:11.895051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.991 [2024-12-14 03:18:11.895058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.991 [2024-12-14 03:18:11.895064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.991 [2024-12-14 03:18:11.895079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.991 qpair failed and we were unable to recover it. 00:36:56.991 [2024-12-14 03:18:11.905129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.991 [2024-12-14 03:18:11.905189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.991 [2024-12-14 03:18:11.905205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.991 [2024-12-14 03:18:11.905212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.991 [2024-12-14 03:18:11.905218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.991 [2024-12-14 03:18:11.905235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.991 qpair failed and we were unable to recover it. 00:36:56.991 [2024-12-14 03:18:11.915169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.991 [2024-12-14 03:18:11.915226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.991 [2024-12-14 03:18:11.915240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.991 [2024-12-14 03:18:11.915247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.991 [2024-12-14 03:18:11.915257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.991 [2024-12-14 03:18:11.915273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.991 qpair failed and we were unable to recover it. 
00:36:56.991 [2024-12-14 03:18:11.925166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.991 [2024-12-14 03:18:11.925221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.991 [2024-12-14 03:18:11.925235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.991 [2024-12-14 03:18:11.925243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.991 [2024-12-14 03:18:11.925249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.991 [2024-12-14 03:18:11.925265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.991 qpair failed and we were unable to recover it. 00:36:56.991 [2024-12-14 03:18:11.935167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.991 [2024-12-14 03:18:11.935218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.991 [2024-12-14 03:18:11.935231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.991 [2024-12-14 03:18:11.935238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.991 [2024-12-14 03:18:11.935245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.991 [2024-12-14 03:18:11.935260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.991 qpair failed and we were unable to recover it. 00:36:56.991 [2024-12-14 03:18:11.945201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.991 [2024-12-14 03:18:11.945267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.991 [2024-12-14 03:18:11.945281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.991 [2024-12-14 03:18:11.945288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.991 [2024-12-14 03:18:11.945294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.991 [2024-12-14 03:18:11.945310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.991 qpair failed and we were unable to recover it. 
00:36:56.991 [2024-12-14 03:18:11.955173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.991 [2024-12-14 03:18:11.955226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.991 [2024-12-14 03:18:11.955239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.991 [2024-12-14 03:18:11.955246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.991 [2024-12-14 03:18:11.955252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.991 [2024-12-14 03:18:11.955267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.991 qpair failed and we were unable to recover it. 00:36:56.991 [2024-12-14 03:18:11.965287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.991 [2024-12-14 03:18:11.965348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.991 [2024-12-14 03:18:11.965361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.991 [2024-12-14 03:18:11.965368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.991 [2024-12-14 03:18:11.965375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.991 [2024-12-14 03:18:11.965390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.991 qpair failed and we were unable to recover it. 00:36:56.991 [2024-12-14 03:18:11.975289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.991 [2024-12-14 03:18:11.975358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.991 [2024-12-14 03:18:11.975372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.992 [2024-12-14 03:18:11.975378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.992 [2024-12-14 03:18:11.975386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.992 [2024-12-14 03:18:11.975401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.992 qpair failed and we were unable to recover it. 
00:36:56.992 [2024-12-14 03:18:11.985323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.992 [2024-12-14 03:18:11.985375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.992 [2024-12-14 03:18:11.985388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.992 [2024-12-14 03:18:11.985395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.992 [2024-12-14 03:18:11.985401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.992 [2024-12-14 03:18:11.985417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.992 qpair failed and we were unable to recover it. 00:36:56.992 [2024-12-14 03:18:11.995354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.992 [2024-12-14 03:18:11.995411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.992 [2024-12-14 03:18:11.995424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.992 [2024-12-14 03:18:11.995430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.992 [2024-12-14 03:18:11.995437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.992 [2024-12-14 03:18:11.995452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.992 qpair failed and we were unable to recover it. 00:36:56.992 [2024-12-14 03:18:12.005386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.992 [2024-12-14 03:18:12.005476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.992 [2024-12-14 03:18:12.005493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.992 [2024-12-14 03:18:12.005499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.992 [2024-12-14 03:18:12.005505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.992 [2024-12-14 03:18:12.005521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.992 qpair failed and we were unable to recover it. 
00:36:56.992 [2024-12-14 03:18:12.015409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.992 [2024-12-14 03:18:12.015464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.992 [2024-12-14 03:18:12.015477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.992 [2024-12-14 03:18:12.015483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.992 [2024-12-14 03:18:12.015490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.992 [2024-12-14 03:18:12.015505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.992 qpair failed and we were unable to recover it. 00:36:56.992 [2024-12-14 03:18:12.025476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.992 [2024-12-14 03:18:12.025531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.992 [2024-12-14 03:18:12.025544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.992 [2024-12-14 03:18:12.025551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.992 [2024-12-14 03:18:12.025558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.992 [2024-12-14 03:18:12.025572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.992 qpair failed and we were unable to recover it. 00:36:56.992 [2024-12-14 03:18:12.035469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.992 [2024-12-14 03:18:12.035522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.992 [2024-12-14 03:18:12.035535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.992 [2024-12-14 03:18:12.035542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.992 [2024-12-14 03:18:12.035548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.992 [2024-12-14 03:18:12.035563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.992 qpair failed and we were unable to recover it. 
00:36:56.992 [2024-12-14 03:18:12.045509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.992 [2024-12-14 03:18:12.045567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.992 [2024-12-14 03:18:12.045579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.992 [2024-12-14 03:18:12.045589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.992 [2024-12-14 03:18:12.045595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.992 [2024-12-14 03:18:12.045610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.992 qpair failed and we were unable to recover it. 00:36:56.992 [2024-12-14 03:18:12.055499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.992 [2024-12-14 03:18:12.055555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.992 [2024-12-14 03:18:12.055568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.992 [2024-12-14 03:18:12.055575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.992 [2024-12-14 03:18:12.055581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.992 [2024-12-14 03:18:12.055596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.992 qpair failed and we were unable to recover it. 00:36:56.992 [2024-12-14 03:18:12.065571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.992 [2024-12-14 03:18:12.065638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.992 [2024-12-14 03:18:12.065651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.992 [2024-12-14 03:18:12.065658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.992 [2024-12-14 03:18:12.065664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.992 [2024-12-14 03:18:12.065679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.992 qpair failed and we were unable to recover it. 
00:36:56.992 [2024-12-14 03:18:12.075637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.992 [2024-12-14 03:18:12.075709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.992 [2024-12-14 03:18:12.075722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.992 [2024-12-14 03:18:12.075729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.992 [2024-12-14 03:18:12.075735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.992 [2024-12-14 03:18:12.075751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.992 qpair failed and we were unable to recover it. 00:36:56.992 [2024-12-14 03:18:12.085638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.992 [2024-12-14 03:18:12.085738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.992 [2024-12-14 03:18:12.085751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.992 [2024-12-14 03:18:12.085758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.992 [2024-12-14 03:18:12.085764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.992 [2024-12-14 03:18:12.085783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.992 qpair failed and we were unable to recover it. 00:36:56.992 [2024-12-14 03:18:12.095646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.992 [2024-12-14 03:18:12.095700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.992 [2024-12-14 03:18:12.095713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.992 [2024-12-14 03:18:12.095720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.992 [2024-12-14 03:18:12.095727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.992 [2024-12-14 03:18:12.095742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.992 qpair failed and we were unable to recover it. 
00:36:56.992 [2024-12-14 03:18:12.105654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.992 [2024-12-14 03:18:12.105728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.992 [2024-12-14 03:18:12.105741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.992 [2024-12-14 03:18:12.105748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.992 [2024-12-14 03:18:12.105754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.992 [2024-12-14 03:18:12.105769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.993 qpair failed and we were unable to recover it. 00:36:56.993 [2024-12-14 03:18:12.115706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.993 [2024-12-14 03:18:12.115762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.993 [2024-12-14 03:18:12.115775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.993 [2024-12-14 03:18:12.115782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.993 [2024-12-14 03:18:12.115788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:56.993 [2024-12-14 03:18:12.115803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:56.993 qpair failed and we were unable to recover it. 00:36:57.253 [2024-12-14 03:18:12.125776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.253 [2024-12-14 03:18:12.125846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.253 [2024-12-14 03:18:12.125859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.253 [2024-12-14 03:18:12.125866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.253 [2024-12-14 03:18:12.125872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.253 [2024-12-14 03:18:12.125887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.253 qpair failed and we were unable to recover it. 
00:36:57.253 [2024-12-14 03:18:12.135777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.253 [2024-12-14 03:18:12.135837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.253 [2024-12-14 03:18:12.135850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.253 [2024-12-14 03:18:12.135857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.253 [2024-12-14 03:18:12.135863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.253 [2024-12-14 03:18:12.135878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.253 qpair failed and we were unable to recover it. 00:36:57.253 [2024-12-14 03:18:12.145742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.253 [2024-12-14 03:18:12.145806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.253 [2024-12-14 03:18:12.145819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.253 [2024-12-14 03:18:12.145826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.253 [2024-12-14 03:18:12.145832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.253 [2024-12-14 03:18:12.145847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.253 qpair failed and we were unable to recover it. 00:36:57.253 [2024-12-14 03:18:12.155816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.253 [2024-12-14 03:18:12.155871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.253 [2024-12-14 03:18:12.155885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.253 [2024-12-14 03:18:12.155891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.253 [2024-12-14 03:18:12.155898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.253 [2024-12-14 03:18:12.155913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.253 qpair failed and we were unable to recover it. 
00:36:57.253 [2024-12-14 03:18:12.165846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.253 [2024-12-14 03:18:12.165913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.253 [2024-12-14 03:18:12.165926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.253 [2024-12-14 03:18:12.165933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.253 [2024-12-14 03:18:12.165939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.253 [2024-12-14 03:18:12.165954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.253 qpair failed and we were unable to recover it. 00:36:57.253 [2024-12-14 03:18:12.175848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.253 [2024-12-14 03:18:12.175903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.253 [2024-12-14 03:18:12.175916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.253 [2024-12-14 03:18:12.175927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.253 [2024-12-14 03:18:12.175933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.253 [2024-12-14 03:18:12.175949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.253 qpair failed and we were unable to recover it. 00:36:57.253 [2024-12-14 03:18:12.185890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.253 [2024-12-14 03:18:12.185980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.253 [2024-12-14 03:18:12.185993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.253 [2024-12-14 03:18:12.186000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.253 [2024-12-14 03:18:12.186007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.253 [2024-12-14 03:18:12.186021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.253 qpair failed and we were unable to recover it. 
00:36:57.253 [2024-12-14 03:18:12.195922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.253 [2024-12-14 03:18:12.196002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.253 [2024-12-14 03:18:12.196015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.253 [2024-12-14 03:18:12.196022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.253 [2024-12-14 03:18:12.196029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.253 [2024-12-14 03:18:12.196044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.253 qpair failed and we were unable to recover it. 00:36:57.253 [2024-12-14 03:18:12.205965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.253 [2024-12-14 03:18:12.206019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.253 [2024-12-14 03:18:12.206032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.253 [2024-12-14 03:18:12.206040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.253 [2024-12-14 03:18:12.206048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.253 [2024-12-14 03:18:12.206064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.253 qpair failed and we were unable to recover it. 00:36:57.253 [2024-12-14 03:18:12.216031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.253 [2024-12-14 03:18:12.216094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.253 [2024-12-14 03:18:12.216108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.253 [2024-12-14 03:18:12.216115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.253 [2024-12-14 03:18:12.216122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.253 [2024-12-14 03:18:12.216141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.253 qpair failed and we were unable to recover it. 
00:36:57.253 [2024-12-14 03:18:12.226059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.253 [2024-12-14 03:18:12.226154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.253 [2024-12-14 03:18:12.226167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.253 [2024-12-14 03:18:12.226174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.253 [2024-12-14 03:18:12.226181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.253 [2024-12-14 03:18:12.226196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.253 qpair failed and we were unable to recover it. 00:36:57.253 [2024-12-14 03:18:12.236061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.254 [2024-12-14 03:18:12.236121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.254 [2024-12-14 03:18:12.236134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.254 [2024-12-14 03:18:12.236141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.254 [2024-12-14 03:18:12.236149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.254 [2024-12-14 03:18:12.236164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.254 qpair failed and we were unable to recover it. 00:36:57.254 [2024-12-14 03:18:12.246080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.254 [2024-12-14 03:18:12.246153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.254 [2024-12-14 03:18:12.246166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.254 [2024-12-14 03:18:12.246173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.254 [2024-12-14 03:18:12.246179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.254 [2024-12-14 03:18:12.246195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.254 qpair failed and we were unable to recover it. 
00:36:57.254 [2024-12-14 03:18:12.256110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.254 [2024-12-14 03:18:12.256172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.254 [2024-12-14 03:18:12.256185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.254 [2024-12-14 03:18:12.256192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.254 [2024-12-14 03:18:12.256198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.254 [2024-12-14 03:18:12.256214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.254 qpair failed and we were unable to recover it. 00:36:57.254 [2024-12-14 03:18:12.266049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.254 [2024-12-14 03:18:12.266136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.254 [2024-12-14 03:18:12.266150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.254 [2024-12-14 03:18:12.266157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.254 [2024-12-14 03:18:12.266163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.254 [2024-12-14 03:18:12.266178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.254 qpair failed and we were unable to recover it. 00:36:57.254 [2024-12-14 03:18:12.276165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.254 [2024-12-14 03:18:12.276218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.254 [2024-12-14 03:18:12.276231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.254 [2024-12-14 03:18:12.276238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.254 [2024-12-14 03:18:12.276245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.254 [2024-12-14 03:18:12.276261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.254 qpair failed and we were unable to recover it. 
00:36:57.254 [2024-12-14 03:18:12.286231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.254 [2024-12-14 03:18:12.286294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.254 [2024-12-14 03:18:12.286307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.254 [2024-12-14 03:18:12.286317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.254 [2024-12-14 03:18:12.286323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.254 [2024-12-14 03:18:12.286339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.254 qpair failed and we were unable to recover it. 00:36:57.254 [2024-12-14 03:18:12.296209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.254 [2024-12-14 03:18:12.296266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.254 [2024-12-14 03:18:12.296279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.254 [2024-12-14 03:18:12.296286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.254 [2024-12-14 03:18:12.296293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.254 [2024-12-14 03:18:12.296308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.254 qpair failed and we were unable to recover it. 00:36:57.254 [2024-12-14 03:18:12.306280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.254 [2024-12-14 03:18:12.306345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.254 [2024-12-14 03:18:12.306362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.254 [2024-12-14 03:18:12.306369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.254 [2024-12-14 03:18:12.306375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.254 [2024-12-14 03:18:12.306390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.254 qpair failed and we were unable to recover it. 
00:36:57.254 [2024-12-14 03:18:12.316269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.254 [2024-12-14 03:18:12.316370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.254 [2024-12-14 03:18:12.316383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.254 [2024-12-14 03:18:12.316390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.254 [2024-12-14 03:18:12.316396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.254 [2024-12-14 03:18:12.316412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.254 qpair failed and we were unable to recover it. 00:36:57.254 [2024-12-14 03:18:12.326287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.254 [2024-12-14 03:18:12.326342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.254 [2024-12-14 03:18:12.326356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.254 [2024-12-14 03:18:12.326363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.254 [2024-12-14 03:18:12.326369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.254 [2024-12-14 03:18:12.326384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.254 qpair failed and we were unable to recover it. 00:36:57.254 [2024-12-14 03:18:12.336243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.254 [2024-12-14 03:18:12.336301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.254 [2024-12-14 03:18:12.336318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.254 [2024-12-14 03:18:12.336326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.254 [2024-12-14 03:18:12.336332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.254 [2024-12-14 03:18:12.336347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.254 qpair failed and we were unable to recover it. 
00:36:57.254 [2024-12-14 03:18:12.346349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.254 [2024-12-14 03:18:12.346406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.254 [2024-12-14 03:18:12.346419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.254 [2024-12-14 03:18:12.346426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.254 [2024-12-14 03:18:12.346436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.254 [2024-12-14 03:18:12.346451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.254 qpair failed and we were unable to recover it. 00:36:57.254 [2024-12-14 03:18:12.356452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.254 [2024-12-14 03:18:12.356556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.254 [2024-12-14 03:18:12.356568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.254 [2024-12-14 03:18:12.356575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.254 [2024-12-14 03:18:12.356581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.254 [2024-12-14 03:18:12.356596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.254 qpair failed and we were unable to recover it. 00:36:57.254 [2024-12-14 03:18:12.366456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.254 [2024-12-14 03:18:12.366522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.254 [2024-12-14 03:18:12.366535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.255 [2024-12-14 03:18:12.366542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.255 [2024-12-14 03:18:12.366548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.255 [2024-12-14 03:18:12.366563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.255 qpair failed and we were unable to recover it. 
00:36:57.255 [2024-12-14 03:18:12.376457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.255 [2024-12-14 03:18:12.376508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.255 [2024-12-14 03:18:12.376520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.255 [2024-12-14 03:18:12.376527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.255 [2024-12-14 03:18:12.376534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.255 [2024-12-14 03:18:12.376549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.255 qpair failed and we were unable to recover it. 00:36:57.514 [2024-12-14 03:18:12.386514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.514 [2024-12-14 03:18:12.386567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.514 [2024-12-14 03:18:12.386580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.514 [2024-12-14 03:18:12.386587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.514 [2024-12-14 03:18:12.386593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.514 [2024-12-14 03:18:12.386608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.514 qpair failed and we were unable to recover it. 00:36:57.514 [2024-12-14 03:18:12.396507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.514 [2024-12-14 03:18:12.396589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.514 [2024-12-14 03:18:12.396603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.514 [2024-12-14 03:18:12.396610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.514 [2024-12-14 03:18:12.396616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.514 [2024-12-14 03:18:12.396631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.514 qpair failed and we were unable to recover it. 
00:36:57.514 [2024-12-14 03:18:12.406530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.514 [2024-12-14 03:18:12.406587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.514 [2024-12-14 03:18:12.406600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.514 [2024-12-14 03:18:12.406607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.514 [2024-12-14 03:18:12.406614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.406629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 00:36:57.515 [2024-12-14 03:18:12.416556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.416609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.416622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.416629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.416636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.416651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 00:36:57.515 [2024-12-14 03:18:12.426583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.426638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.426651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.426658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.426664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.426679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 
00:36:57.515 [2024-12-14 03:18:12.436660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.436729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.436745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.436752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.436758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.436774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 00:36:57.515 [2024-12-14 03:18:12.446700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.446756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.446769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.446776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.446782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.446797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 00:36:57.515 [2024-12-14 03:18:12.456665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.456766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.456780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.456786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.456793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.456808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 
00:36:57.515 [2024-12-14 03:18:12.466695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.466747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.466760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.466767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.466773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.466788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 00:36:57.515 [2024-12-14 03:18:12.476747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.476827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.476840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.476847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.476857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.476872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 00:36:57.515 [2024-12-14 03:18:12.486754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.486856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.486869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.486875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.486881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.486897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 
00:36:57.515 [2024-12-14 03:18:12.496785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.496839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.496852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.496859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.496865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.496881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 00:36:57.515 [2024-12-14 03:18:12.506805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.506857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.506870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.506877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.506883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.506898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 00:36:57.515 [2024-12-14 03:18:12.516852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.516908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.516922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.516928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.516934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.516949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 
00:36:57.515 [2024-12-14 03:18:12.526866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.526922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.526935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.526942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.526948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.526963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 00:36:57.515 [2024-12-14 03:18:12.536838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.536889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.536902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.536909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.536916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.536931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 00:36:57.515 [2024-12-14 03:18:12.546941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.547012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.547025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.547032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.547038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.547053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 
00:36:57.515 [2024-12-14 03:18:12.556963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.557048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.557061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.557067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.557073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.557089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 00:36:57.515 [2024-12-14 03:18:12.567000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.567075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.567091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.567098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.567104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.567118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 00:36:57.515 [2024-12-14 03:18:12.577004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.577059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.577073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.577079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.577086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.577101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 
00:36:57.515 [2024-12-14 03:18:12.587033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.587086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.587099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.515 [2024-12-14 03:18:12.587106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.515 [2024-12-14 03:18:12.587112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.515 [2024-12-14 03:18:12.587127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.515 qpair failed and we were unable to recover it. 00:36:57.515 [2024-12-14 03:18:12.597105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.515 [2024-12-14 03:18:12.597162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.515 [2024-12-14 03:18:12.597176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.516 [2024-12-14 03:18:12.597182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.516 [2024-12-14 03:18:12.597189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.516 [2024-12-14 03:18:12.597204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.516 qpair failed and we were unable to recover it. 00:36:57.516 [2024-12-14 03:18:12.607107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.516 [2024-12-14 03:18:12.607160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.516 [2024-12-14 03:18:12.607173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.516 [2024-12-14 03:18:12.607183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.516 [2024-12-14 03:18:12.607189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.516 [2024-12-14 03:18:12.607204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.516 qpair failed and we were unable to recover it. 
00:36:57.516 [2024-12-14 03:18:12.617152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.516 [2024-12-14 03:18:12.617222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.516 [2024-12-14 03:18:12.617236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.516 [2024-12-14 03:18:12.617242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.516 [2024-12-14 03:18:12.617248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.516 [2024-12-14 03:18:12.617264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.516 qpair failed and we were unable to recover it. 00:36:57.516 [2024-12-14 03:18:12.627165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.516 [2024-12-14 03:18:12.627220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.516 [2024-12-14 03:18:12.627233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.516 [2024-12-14 03:18:12.627241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.516 [2024-12-14 03:18:12.627248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.516 [2024-12-14 03:18:12.627263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.516 qpair failed and we were unable to recover it. 00:36:57.516 [2024-12-14 03:18:12.637238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.516 [2024-12-14 03:18:12.637297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.516 [2024-12-14 03:18:12.637309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.516 [2024-12-14 03:18:12.637320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.516 [2024-12-14 03:18:12.637326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.516 [2024-12-14 03:18:12.637341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.516 qpair failed and we were unable to recover it. 
00:36:57.776 [2024-12-14 03:18:12.647196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.776 [2024-12-14 03:18:12.647257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.776 [2024-12-14 03:18:12.647270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.776 [2024-12-14 03:18:12.647277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.776 [2024-12-14 03:18:12.647283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.776 [2024-12-14 03:18:12.647301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.776 qpair failed and we were unable to recover it. 00:36:57.776 [2024-12-14 03:18:12.657265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.776 [2024-12-14 03:18:12.657323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.776 [2024-12-14 03:18:12.657336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.776 [2024-12-14 03:18:12.657342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.776 [2024-12-14 03:18:12.657349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.776 [2024-12-14 03:18:12.657364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.776 qpair failed and we were unable to recover it. 00:36:57.776 [2024-12-14 03:18:12.667275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.776 [2024-12-14 03:18:12.667340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.776 [2024-12-14 03:18:12.667352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.776 [2024-12-14 03:18:12.667359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.776 [2024-12-14 03:18:12.667365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.776 [2024-12-14 03:18:12.667380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.776 qpair failed and we were unable to recover it. 
00:36:57.776 [2024-12-14 03:18:12.677309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.776 [2024-12-14 03:18:12.677381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.776 [2024-12-14 03:18:12.677394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.776 [2024-12-14 03:18:12.677401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.776 [2024-12-14 03:18:12.677407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.776 [2024-12-14 03:18:12.677423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.776 qpair failed and we were unable to recover it. 00:36:57.776 [2024-12-14 03:18:12.687388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.776 [2024-12-14 03:18:12.687447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.776 [2024-12-14 03:18:12.687460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.776 [2024-12-14 03:18:12.687467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.776 [2024-12-14 03:18:12.687473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.776 [2024-12-14 03:18:12.687488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.776 qpair failed and we were unable to recover it. 00:36:57.776 [2024-12-14 03:18:12.697372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.776 [2024-12-14 03:18:12.697427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.776 [2024-12-14 03:18:12.697440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.776 [2024-12-14 03:18:12.697447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.776 [2024-12-14 03:18:12.697453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.776 [2024-12-14 03:18:12.697468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.776 qpair failed and we were unable to recover it. 
00:36:57.776 [2024-12-14 03:18:12.707425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.776 [2024-12-14 03:18:12.707508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.777 [2024-12-14 03:18:12.707521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.777 [2024-12-14 03:18:12.707528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.777 [2024-12-14 03:18:12.707534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.777 [2024-12-14 03:18:12.707549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.777 qpair failed and we were unable to recover it. 00:36:57.777 [2024-12-14 03:18:12.717391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.777 [2024-12-14 03:18:12.717449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.777 [2024-12-14 03:18:12.717468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.777 [2024-12-14 03:18:12.717476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.777 [2024-12-14 03:18:12.717483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.777 [2024-12-14 03:18:12.717503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.777 qpair failed and we were unable to recover it. 00:36:57.777 [2024-12-14 03:18:12.727392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.777 [2024-12-14 03:18:12.727448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.777 [2024-12-14 03:18:12.727461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.777 [2024-12-14 03:18:12.727468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.777 [2024-12-14 03:18:12.727475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.777 [2024-12-14 03:18:12.727491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.777 qpair failed and we were unable to recover it. 
00:36:57.777 [2024-12-14 03:18:12.737417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.777 [2024-12-14 03:18:12.737470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.777 [2024-12-14 03:18:12.737483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.777 [2024-12-14 03:18:12.737494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.777 [2024-12-14 03:18:12.737501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.777 [2024-12-14 03:18:12.737516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.777 qpair failed and we were unable to recover it. 00:36:57.777 [2024-12-14 03:18:12.747489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.777 [2024-12-14 03:18:12.747568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.777 [2024-12-14 03:18:12.747581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.777 [2024-12-14 03:18:12.747587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.777 [2024-12-14 03:18:12.747594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.777 [2024-12-14 03:18:12.747609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.777 qpair failed and we were unable to recover it. 00:36:57.777 [2024-12-14 03:18:12.757535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.777 [2024-12-14 03:18:12.757616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.777 [2024-12-14 03:18:12.757630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.777 [2024-12-14 03:18:12.757637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.777 [2024-12-14 03:18:12.757643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.777 [2024-12-14 03:18:12.757659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.777 qpair failed and we were unable to recover it. 
00:36:57.777 [2024-12-14 03:18:12.767561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.777 [2024-12-14 03:18:12.767628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.777 [2024-12-14 03:18:12.767640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.777 [2024-12-14 03:18:12.767648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.777 [2024-12-14 03:18:12.767654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.777 [2024-12-14 03:18:12.767669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.777 qpair failed and we were unable to recover it. 00:36:57.777 [2024-12-14 03:18:12.777557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.777 [2024-12-14 03:18:12.777609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.777 [2024-12-14 03:18:12.777622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.777 [2024-12-14 03:18:12.777628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.777 [2024-12-14 03:18:12.777634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.777 [2024-12-14 03:18:12.777653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.777 qpair failed and we were unable to recover it. 00:36:57.777 [2024-12-14 03:18:12.787655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.777 [2024-12-14 03:18:12.787709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.777 [2024-12-14 03:18:12.787723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.777 [2024-12-14 03:18:12.787729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.777 [2024-12-14 03:18:12.787735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.777 [2024-12-14 03:18:12.787750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.777 qpair failed and we were unable to recover it. 
00:36:57.777 [2024-12-14 03:18:12.797637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.777 [2024-12-14 03:18:12.797709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.777 [2024-12-14 03:18:12.797722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.777 [2024-12-14 03:18:12.797729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.777 [2024-12-14 03:18:12.797735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.777 [2024-12-14 03:18:12.797751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.777 qpair failed and we were unable to recover it. 00:36:57.777 [2024-12-14 03:18:12.807662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.777 [2024-12-14 03:18:12.807716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.777 [2024-12-14 03:18:12.807729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.777 [2024-12-14 03:18:12.807736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.777 [2024-12-14 03:18:12.807742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.777 [2024-12-14 03:18:12.807758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.777 qpair failed and we were unable to recover it. 00:36:57.777 [2024-12-14 03:18:12.817685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.777 [2024-12-14 03:18:12.817739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.777 [2024-12-14 03:18:12.817752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.777 [2024-12-14 03:18:12.817759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.777 [2024-12-14 03:18:12.817765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.777 [2024-12-14 03:18:12.817780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.777 qpair failed and we were unable to recover it. 
00:36:57.777 [2024-12-14 03:18:12.827730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.777 [2024-12-14 03:18:12.827781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.777 [2024-12-14 03:18:12.827794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.777 [2024-12-14 03:18:12.827801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.777 [2024-12-14 03:18:12.827807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.777 [2024-12-14 03:18:12.827822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.777 qpair failed and we were unable to recover it. 00:36:57.777 [2024-12-14 03:18:12.837710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.777 [2024-12-14 03:18:12.837807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.778 [2024-12-14 03:18:12.837821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.778 [2024-12-14 03:18:12.837828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.778 [2024-12-14 03:18:12.837834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.778 [2024-12-14 03:18:12.837849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.778 qpair failed and we were unable to recover it. 00:36:57.778 [2024-12-14 03:18:12.847772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.778 [2024-12-14 03:18:12.847831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.778 [2024-12-14 03:18:12.847845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.778 [2024-12-14 03:18:12.847852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.778 [2024-12-14 03:18:12.847857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.778 [2024-12-14 03:18:12.847873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.778 qpair failed and we were unable to recover it. 
00:36:57.778 [2024-12-14 03:18:12.857806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.778 [2024-12-14 03:18:12.857862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.778 [2024-12-14 03:18:12.857876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.778 [2024-12-14 03:18:12.857883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.778 [2024-12-14 03:18:12.857890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.778 [2024-12-14 03:18:12.857905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.778 qpair failed and we were unable to recover it. 00:36:57.778 [2024-12-14 03:18:12.867828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.778 [2024-12-14 03:18:12.867877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.778 [2024-12-14 03:18:12.867893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.778 [2024-12-14 03:18:12.867899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.778 [2024-12-14 03:18:12.867906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.778 [2024-12-14 03:18:12.867920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.778 qpair failed and we were unable to recover it. 00:36:57.778 [2024-12-14 03:18:12.877803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.778 [2024-12-14 03:18:12.877859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.778 [2024-12-14 03:18:12.877872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.778 [2024-12-14 03:18:12.877879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.778 [2024-12-14 03:18:12.877885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.778 [2024-12-14 03:18:12.877900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.778 qpair failed and we were unable to recover it. 
00:36:57.778 [2024-12-14 03:18:12.887887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.778 [2024-12-14 03:18:12.887955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.778 [2024-12-14 03:18:12.887969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.778 [2024-12-14 03:18:12.887976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.778 [2024-12-14 03:18:12.887982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.778 [2024-12-14 03:18:12.887998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.778 qpair failed and we were unable to recover it. 00:36:57.778 [2024-12-14 03:18:12.897931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.778 [2024-12-14 03:18:12.897987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.778 [2024-12-14 03:18:12.898001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.778 [2024-12-14 03:18:12.898008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.778 [2024-12-14 03:18:12.898015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:57.778 [2024-12-14 03:18:12.898030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:57.778 qpair failed and we were unable to recover it. 00:36:58.038 [2024-12-14 03:18:12.907976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.038 [2024-12-14 03:18:12.908037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.038 [2024-12-14 03:18:12.908050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.038 [2024-12-14 03:18:12.908057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.038 [2024-12-14 03:18:12.908068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.038 [2024-12-14 03:18:12.908082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.038 qpair failed and we were unable to recover it. 
00:36:58.038 [2024-12-14 03:18:12.917979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.038 [2024-12-14 03:18:12.918067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.038 [2024-12-14 03:18:12.918081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.038 [2024-12-14 03:18:12.918088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.038 [2024-12-14 03:18:12.918094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.038 [2024-12-14 03:18:12.918109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.038 qpair failed and we were unable to recover it. 00:36:58.038 [2024-12-14 03:18:12.928007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.038 [2024-12-14 03:18:12.928064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.038 [2024-12-14 03:18:12.928077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.038 [2024-12-14 03:18:12.928085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.038 [2024-12-14 03:18:12.928091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.038 [2024-12-14 03:18:12.928106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.038 qpair failed and we were unable to recover it. 00:36:58.038 [2024-12-14 03:18:12.938055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.038 [2024-12-14 03:18:12.938133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.038 [2024-12-14 03:18:12.938146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.038 [2024-12-14 03:18:12.938154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.038 [2024-12-14 03:18:12.938160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.038 [2024-12-14 03:18:12.938175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.038 qpair failed and we were unable to recover it. 
00:36:58.038 [2024-12-14 03:18:12.948075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.038 [2024-12-14 03:18:12.948133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.038 [2024-12-14 03:18:12.948146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.038 [2024-12-14 03:18:12.948152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.038 [2024-12-14 03:18:12.948159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.038 [2024-12-14 03:18:12.948173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.038 qpair failed and we were unable to recover it. 00:36:58.038 [2024-12-14 03:18:12.958100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.039 [2024-12-14 03:18:12.958153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.039 [2024-12-14 03:18:12.958166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.039 [2024-12-14 03:18:12.958173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.039 [2024-12-14 03:18:12.958180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.039 [2024-12-14 03:18:12.958195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.039 qpair failed and we were unable to recover it. 00:36:58.039 [2024-12-14 03:18:12.968122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.039 [2024-12-14 03:18:12.968174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.039 [2024-12-14 03:18:12.968187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.039 [2024-12-14 03:18:12.968194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.039 [2024-12-14 03:18:12.968200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.039 [2024-12-14 03:18:12.968215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.039 qpair failed and we were unable to recover it. 
00:36:58.039 [2024-12-14 03:18:12.978168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.039 [2024-12-14 03:18:12.978217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.039 [2024-12-14 03:18:12.978230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.039 [2024-12-14 03:18:12.978237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.039 [2024-12-14 03:18:12.978243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.039 [2024-12-14 03:18:12.978259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.039 qpair failed and we were unable to recover it. 00:36:58.039 [2024-12-14 03:18:12.988176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.039 [2024-12-14 03:18:12.988233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.039 [2024-12-14 03:18:12.988246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.039 [2024-12-14 03:18:12.988253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.039 [2024-12-14 03:18:12.988259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.039 [2024-12-14 03:18:12.988274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.039 qpair failed and we were unable to recover it. 00:36:58.039 [2024-12-14 03:18:12.998142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.039 [2024-12-14 03:18:12.998240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.039 [2024-12-14 03:18:12.998258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.039 [2024-12-14 03:18:12.998265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.039 [2024-12-14 03:18:12.998271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.039 [2024-12-14 03:18:12.998287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.039 qpair failed and we were unable to recover it. 
00:36:58.039 [2024-12-14 03:18:13.008258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.039 [2024-12-14 03:18:13.008319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.039 [2024-12-14 03:18:13.008333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.039 [2024-12-14 03:18:13.008339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.039 [2024-12-14 03:18:13.008346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.039 [2024-12-14 03:18:13.008361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.039 qpair failed and we were unable to recover it. 00:36:58.039 [2024-12-14 03:18:13.018281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.039 [2024-12-14 03:18:13.018338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.039 [2024-12-14 03:18:13.018352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.039 [2024-12-14 03:18:13.018359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.039 [2024-12-14 03:18:13.018365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.039 [2024-12-14 03:18:13.018380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.039 qpair failed and we were unable to recover it. 00:36:58.039 [2024-12-14 03:18:13.028330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.039 [2024-12-14 03:18:13.028388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.039 [2024-12-14 03:18:13.028401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.039 [2024-12-14 03:18:13.028408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.039 [2024-12-14 03:18:13.028414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.039 [2024-12-14 03:18:13.028429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.039 qpair failed and we were unable to recover it. 
00:36:58.039 [2024-12-14 03:18:13.038359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.039 [2024-12-14 03:18:13.038412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.039 [2024-12-14 03:18:13.038425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.039 [2024-12-14 03:18:13.038432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.039 [2024-12-14 03:18:13.038441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.039 [2024-12-14 03:18:13.038457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.039 qpair failed and we were unable to recover it. 00:36:58.039 [2024-12-14 03:18:13.048369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.039 [2024-12-14 03:18:13.048421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.039 [2024-12-14 03:18:13.048434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.039 [2024-12-14 03:18:13.048441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.039 [2024-12-14 03:18:13.048447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.039 [2024-12-14 03:18:13.048462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.039 qpair failed and we were unable to recover it. 00:36:58.039 [2024-12-14 03:18:13.058376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.039 [2024-12-14 03:18:13.058439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.039 [2024-12-14 03:18:13.058453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.039 [2024-12-14 03:18:13.058460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.039 [2024-12-14 03:18:13.058466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.039 [2024-12-14 03:18:13.058482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.039 qpair failed and we were unable to recover it. 
00:36:58.039 [2024-12-14 03:18:13.068408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.039 [2024-12-14 03:18:13.068460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.039 [2024-12-14 03:18:13.068473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.039 [2024-12-14 03:18:13.068480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.039 [2024-12-14 03:18:13.068486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.039 [2024-12-14 03:18:13.068501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.039 qpair failed and we were unable to recover it. 00:36:58.039 [2024-12-14 03:18:13.078420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.039 [2024-12-14 03:18:13.078531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.039 [2024-12-14 03:18:13.078544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.039 [2024-12-14 03:18:13.078551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.039 [2024-12-14 03:18:13.078557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.039 [2024-12-14 03:18:13.078573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.039 qpair failed and we were unable to recover it. 00:36:58.039 [2024-12-14 03:18:13.088477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.039 [2024-12-14 03:18:13.088531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.039 [2024-12-14 03:18:13.088544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.040 [2024-12-14 03:18:13.088551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.040 [2024-12-14 03:18:13.088557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.040 [2024-12-14 03:18:13.088573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.040 qpair failed and we were unable to recover it. 
00:36:58.040 [2024-12-14 03:18:13.098555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.040 [2024-12-14 03:18:13.098640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.040 [2024-12-14 03:18:13.098653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.040 [2024-12-14 03:18:13.098660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.040 [2024-12-14 03:18:13.098666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.040 [2024-12-14 03:18:13.098681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.040 qpair failed and we were unable to recover it. 00:36:58.040 [2024-12-14 03:18:13.108537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.040 [2024-12-14 03:18:13.108595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.040 [2024-12-14 03:18:13.108608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.040 [2024-12-14 03:18:13.108615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.040 [2024-12-14 03:18:13.108621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.040 [2024-12-14 03:18:13.108636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.040 qpair failed and we were unable to recover it. 00:36:58.040 [2024-12-14 03:18:13.118562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.040 [2024-12-14 03:18:13.118619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.040 [2024-12-14 03:18:13.118632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.040 [2024-12-14 03:18:13.118639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.040 [2024-12-14 03:18:13.118645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.040 [2024-12-14 03:18:13.118660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.040 qpair failed and we were unable to recover it. 
00:36:58.040 [2024-12-14 03:18:13.128620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.040 [2024-12-14 03:18:13.128725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.040 [2024-12-14 03:18:13.128738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.040 [2024-12-14 03:18:13.128745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.040 [2024-12-14 03:18:13.128751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.040 [2024-12-14 03:18:13.128765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.040 qpair failed and we were unable to recover it. 00:36:58.040 [2024-12-14 03:18:13.138547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.040 [2024-12-14 03:18:13.138597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.040 [2024-12-14 03:18:13.138610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.040 [2024-12-14 03:18:13.138617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.040 [2024-12-14 03:18:13.138624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.040 [2024-12-14 03:18:13.138639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.040 qpair failed and we were unable to recover it. 00:36:58.040 [2024-12-14 03:18:13.148659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.040 [2024-12-14 03:18:13.148738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.040 [2024-12-14 03:18:13.148751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.040 [2024-12-14 03:18:13.148757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.040 [2024-12-14 03:18:13.148764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.040 [2024-12-14 03:18:13.148778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.040 qpair failed and we were unable to recover it. 
00:36:58.040 [2024-12-14 03:18:13.158707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.040 [2024-12-14 03:18:13.158786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.040 [2024-12-14 03:18:13.158799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.040 [2024-12-14 03:18:13.158806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.040 [2024-12-14 03:18:13.158812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.040 [2024-12-14 03:18:13.158828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.040 qpair failed and we were unable to recover it. 00:36:58.040 [2024-12-14 03:18:13.168705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.040 [2024-12-14 03:18:13.168760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.040 [2024-12-14 03:18:13.168773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.040 [2024-12-14 03:18:13.168782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.040 [2024-12-14 03:18:13.168789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.040 [2024-12-14 03:18:13.168804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.040 qpair failed and we were unable to recover it. 00:36:58.300 [2024-12-14 03:18:13.178647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.300 [2024-12-14 03:18:13.178707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.300 [2024-12-14 03:18:13.178720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.300 [2024-12-14 03:18:13.178727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.300 [2024-12-14 03:18:13.178733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.300 [2024-12-14 03:18:13.178748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.300 qpair failed and we were unable to recover it. 
00:36:58.300 [2024-12-14 03:18:13.188750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.300 [2024-12-14 03:18:13.188802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.300 [2024-12-14 03:18:13.188814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.300 [2024-12-14 03:18:13.188821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.300 [2024-12-14 03:18:13.188827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.300 [2024-12-14 03:18:13.188842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.300 qpair failed and we were unable to recover it. 00:36:58.300 [2024-12-14 03:18:13.198778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.300 [2024-12-14 03:18:13.198832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.300 [2024-12-14 03:18:13.198844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.300 [2024-12-14 03:18:13.198851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.300 [2024-12-14 03:18:13.198858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.300 [2024-12-14 03:18:13.198873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.300 qpair failed and we were unable to recover it. 00:36:58.300 [2024-12-14 03:18:13.208802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.300 [2024-12-14 03:18:13.208855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.300 [2024-12-14 03:18:13.208868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.300 [2024-12-14 03:18:13.208874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.300 [2024-12-14 03:18:13.208881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.300 [2024-12-14 03:18:13.208899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.300 qpair failed and we were unable to recover it. 
00:36:58.300 [2024-12-14 03:18:13.218755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.300 [2024-12-14 03:18:13.218823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.300 [2024-12-14 03:18:13.218836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.300 [2024-12-14 03:18:13.218843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.300 [2024-12-14 03:18:13.218849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.300 [2024-12-14 03:18:13.218863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.300 qpair failed and we were unable to recover it. 00:36:58.300 [2024-12-14 03:18:13.228840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.300 [2024-12-14 03:18:13.228893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.300 [2024-12-14 03:18:13.228906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.300 [2024-12-14 03:18:13.228913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.300 [2024-12-14 03:18:13.228919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.301 [2024-12-14 03:18:13.228934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.301 qpair failed and we were unable to recover it. 00:36:58.301 [2024-12-14 03:18:13.238883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.301 [2024-12-14 03:18:13.238937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.301 [2024-12-14 03:18:13.238950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.301 [2024-12-14 03:18:13.238957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.301 [2024-12-14 03:18:13.238964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.301 [2024-12-14 03:18:13.238979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.301 qpair failed and we were unable to recover it. 
00:36:58.301 [2024-12-14 03:18:13.248934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.301 [2024-12-14 03:18:13.248990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.301 [2024-12-14 03:18:13.249004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.301 [2024-12-14 03:18:13.249011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.301 [2024-12-14 03:18:13.249017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.301 [2024-12-14 03:18:13.249033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.301 qpair failed and we were unable to recover it. 00:36:58.301 [2024-12-14 03:18:13.258920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.301 [2024-12-14 03:18:13.258993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.301 [2024-12-14 03:18:13.259008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.301 [2024-12-14 03:18:13.259015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.301 [2024-12-14 03:18:13.259022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.301 [2024-12-14 03:18:13.259038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.301 qpair failed and we were unable to recover it. 00:36:58.301 [2024-12-14 03:18:13.269015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.301 [2024-12-14 03:18:13.269073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.301 [2024-12-14 03:18:13.269087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.301 [2024-12-14 03:18:13.269095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.301 [2024-12-14 03:18:13.269102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.301 [2024-12-14 03:18:13.269118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.301 qpair failed and we were unable to recover it. 
00:36:58.301 [2024-12-14 03:18:13.279015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.301 [2024-12-14 03:18:13.279073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.301 [2024-12-14 03:18:13.279086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.301 [2024-12-14 03:18:13.279093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.301 [2024-12-14 03:18:13.279100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.301 [2024-12-14 03:18:13.279116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.301 qpair failed and we were unable to recover it. 00:36:58.301 [2024-12-14 03:18:13.289046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.301 [2024-12-14 03:18:13.289101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.301 [2024-12-14 03:18:13.289114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.301 [2024-12-14 03:18:13.289121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.301 [2024-12-14 03:18:13.289128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.301 [2024-12-14 03:18:13.289142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.301 qpair failed and we were unable to recover it. 00:36:58.301 [2024-12-14 03:18:13.299066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.301 [2024-12-14 03:18:13.299157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.301 [2024-12-14 03:18:13.299170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.301 [2024-12-14 03:18:13.299180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.301 [2024-12-14 03:18:13.299186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.301 [2024-12-14 03:18:13.299201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.301 qpair failed and we were unable to recover it. 
00:36:58.301 [2024-12-14 03:18:13.309127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.301 [2024-12-14 03:18:13.309183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.301 [2024-12-14 03:18:13.309196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.301 [2024-12-14 03:18:13.309202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.301 [2024-12-14 03:18:13.309209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.301 [2024-12-14 03:18:13.309224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.301 qpair failed and we were unable to recover it. 00:36:58.301 [2024-12-14 03:18:13.319103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.301 [2024-12-14 03:18:13.319156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.301 [2024-12-14 03:18:13.319169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.301 [2024-12-14 03:18:13.319175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.301 [2024-12-14 03:18:13.319182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.301 [2024-12-14 03:18:13.319197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.301 qpair failed and we were unable to recover it. 00:36:58.301 [2024-12-14 03:18:13.329142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.301 [2024-12-14 03:18:13.329191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.301 [2024-12-14 03:18:13.329204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.301 [2024-12-14 03:18:13.329211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.301 [2024-12-14 03:18:13.329217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.301 [2024-12-14 03:18:13.329233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.301 qpair failed and we were unable to recover it. 
00:36:58.301 [2024-12-14 03:18:13.339169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.301 [2024-12-14 03:18:13.339221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.301 [2024-12-14 03:18:13.339234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.301 [2024-12-14 03:18:13.339241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.301 [2024-12-14 03:18:13.339247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.301 [2024-12-14 03:18:13.339265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.301 qpair failed and we were unable to recover it. 00:36:58.301 [2024-12-14 03:18:13.349184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.301 [2024-12-14 03:18:13.349238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.301 [2024-12-14 03:18:13.349251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.301 [2024-12-14 03:18:13.349257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.301 [2024-12-14 03:18:13.349264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.301 [2024-12-14 03:18:13.349280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.301 qpair failed and we were unable to recover it. 00:36:58.301 [2024-12-14 03:18:13.359223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.301 [2024-12-14 03:18:13.359294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.301 [2024-12-14 03:18:13.359308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.301 [2024-12-14 03:18:13.359318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.301 [2024-12-14 03:18:13.359325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.301 [2024-12-14 03:18:13.359340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.301 qpair failed and we were unable to recover it. 
00:36:58.301 [2024-12-14 03:18:13.369252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.301 [2024-12-14 03:18:13.369308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.302 [2024-12-14 03:18:13.369323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.302 [2024-12-14 03:18:13.369330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.302 [2024-12-14 03:18:13.369336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.302 [2024-12-14 03:18:13.369351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.302 qpair failed and we were unable to recover it. 00:36:58.302 [2024-12-14 03:18:13.379332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.302 [2024-12-14 03:18:13.379389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.302 [2024-12-14 03:18:13.379402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.302 [2024-12-14 03:18:13.379409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.302 [2024-12-14 03:18:13.379415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.302 [2024-12-14 03:18:13.379431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.302 qpair failed and we were unable to recover it. 00:36:58.302 [2024-12-14 03:18:13.389322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.302 [2024-12-14 03:18:13.389393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.302 [2024-12-14 03:18:13.389407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.302 [2024-12-14 03:18:13.389414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.302 [2024-12-14 03:18:13.389420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.302 [2024-12-14 03:18:13.389436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.302 qpair failed and we were unable to recover it. 
00:36:58.302 [2024-12-14 03:18:13.399361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.302 [2024-12-14 03:18:13.399426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.302 [2024-12-14 03:18:13.399439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.302 [2024-12-14 03:18:13.399446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.302 [2024-12-14 03:18:13.399452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.302 [2024-12-14 03:18:13.399467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.302 qpair failed and we were unable to recover it. 00:36:58.302 [2024-12-14 03:18:13.409368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.302 [2024-12-14 03:18:13.409425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.302 [2024-12-14 03:18:13.409438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.302 [2024-12-14 03:18:13.409445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.302 [2024-12-14 03:18:13.409451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.302 [2024-12-14 03:18:13.409467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.302 qpair failed and we were unable to recover it. 00:36:58.302 [2024-12-14 03:18:13.419439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.302 [2024-12-14 03:18:13.419504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.302 [2024-12-14 03:18:13.419517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.302 [2024-12-14 03:18:13.419524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.302 [2024-12-14 03:18:13.419530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:58.302 [2024-12-14 03:18:13.419545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:58.302 qpair failed and we were unable to recover it. 
[... the identical failure sequence (ctrlr.c:764 "Unknown controller ID 0x1"; nvme_fabric.c:599/610 "Connect command failed, rc -5" with "sct 1, sc 130"; nvme_tcp.c:2348/2125 "Failed to connect tqpair=0x7f3b4c000b90"; nvme_qpair.c:812 "CQ transport error -6 (No such device or address) on qpair id 4"; "qpair failed and we were unable to recover it.") repeats for each subsequent connect attempt, roughly every 10 ms from 03:18:13.429 through 03:18:14.081, differing only in the timestamps ...]
00:36:59.087 [2024-12-14 03:18:14.091304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.087 [2024-12-14 03:18:14.091362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.087 [2024-12-14 03:18:14.091375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.087 [2024-12-14 03:18:14.091382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.087 [2024-12-14 03:18:14.091388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.087 [2024-12-14 03:18:14.091403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.087 qpair failed and we were unable to recover it. 00:36:59.087 [2024-12-14 03:18:14.101353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.087 [2024-12-14 03:18:14.101410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.087 [2024-12-14 03:18:14.101423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.087 [2024-12-14 03:18:14.101430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.087 [2024-12-14 03:18:14.101437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.087 [2024-12-14 03:18:14.101453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.087 qpair failed and we were unable to recover it. 00:36:59.087 [2024-12-14 03:18:14.111357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.087 [2024-12-14 03:18:14.111415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.087 [2024-12-14 03:18:14.111428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.087 [2024-12-14 03:18:14.111434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.087 [2024-12-14 03:18:14.111441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.087 [2024-12-14 03:18:14.111456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.087 qpair failed and we were unable to recover it. 
00:36:59.087 [2024-12-14 03:18:14.121394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.087 [2024-12-14 03:18:14.121452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.087 [2024-12-14 03:18:14.121468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.087 [2024-12-14 03:18:14.121474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.087 [2024-12-14 03:18:14.121481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.087 [2024-12-14 03:18:14.121496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.087 qpair failed and we were unable to recover it. 00:36:59.087 [2024-12-14 03:18:14.131444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.087 [2024-12-14 03:18:14.131503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.087 [2024-12-14 03:18:14.131516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.087 [2024-12-14 03:18:14.131523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.087 [2024-12-14 03:18:14.131530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.087 [2024-12-14 03:18:14.131544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.087 qpair failed and we were unable to recover it. 00:36:59.087 [2024-12-14 03:18:14.141448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.087 [2024-12-14 03:18:14.141522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.087 [2024-12-14 03:18:14.141535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.087 [2024-12-14 03:18:14.141543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.087 [2024-12-14 03:18:14.141550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.087 [2024-12-14 03:18:14.141565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.087 qpair failed and we were unable to recover it. 
00:36:59.087 [2024-12-14 03:18:14.151481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.087 [2024-12-14 03:18:14.151562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.087 [2024-12-14 03:18:14.151575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.087 [2024-12-14 03:18:14.151582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.087 [2024-12-14 03:18:14.151588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.087 [2024-12-14 03:18:14.151602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.087 qpair failed and we were unable to recover it. 00:36:59.087 [2024-12-14 03:18:14.161536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.087 [2024-12-14 03:18:14.161600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.087 [2024-12-14 03:18:14.161613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.087 [2024-12-14 03:18:14.161623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.087 [2024-12-14 03:18:14.161629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.087 [2024-12-14 03:18:14.161645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.087 qpair failed and we were unable to recover it. 00:36:59.087 [2024-12-14 03:18:14.171531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.087 [2024-12-14 03:18:14.171587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.087 [2024-12-14 03:18:14.171599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.087 [2024-12-14 03:18:14.171606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.087 [2024-12-14 03:18:14.171613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.087 [2024-12-14 03:18:14.171628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.087 qpair failed and we were unable to recover it. 
00:36:59.087 [2024-12-14 03:18:14.181603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.087 [2024-12-14 03:18:14.181658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.087 [2024-12-14 03:18:14.181672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.087 [2024-12-14 03:18:14.181678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.087 [2024-12-14 03:18:14.181685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.087 [2024-12-14 03:18:14.181700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.087 qpair failed and we were unable to recover it. 00:36:59.087 [2024-12-14 03:18:14.191576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.087 [2024-12-14 03:18:14.191629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.087 [2024-12-14 03:18:14.191642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.087 [2024-12-14 03:18:14.191649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.087 [2024-12-14 03:18:14.191655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.087 [2024-12-14 03:18:14.191671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.087 qpair failed and we were unable to recover it. 00:36:59.087 [2024-12-14 03:18:14.201666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.087 [2024-12-14 03:18:14.201725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.087 [2024-12-14 03:18:14.201737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.087 [2024-12-14 03:18:14.201744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.087 [2024-12-14 03:18:14.201751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.087 [2024-12-14 03:18:14.201766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.087 qpair failed and we were unable to recover it. 
00:36:59.087 [2024-12-14 03:18:14.211647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.087 [2024-12-14 03:18:14.211744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.087 [2024-12-14 03:18:14.211757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.088 [2024-12-14 03:18:14.211763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.088 [2024-12-14 03:18:14.211769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.088 [2024-12-14 03:18:14.211783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.088 qpair failed and we were unable to recover it. 00:36:59.347 [2024-12-14 03:18:14.221660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.347 [2024-12-14 03:18:14.221755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.347 [2024-12-14 03:18:14.221768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.347 [2024-12-14 03:18:14.221775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.347 [2024-12-14 03:18:14.221781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.347 [2024-12-14 03:18:14.221796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.347 qpair failed and we were unable to recover it. 00:36:59.347 [2024-12-14 03:18:14.231687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.347 [2024-12-14 03:18:14.231743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.347 [2024-12-14 03:18:14.231756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.347 [2024-12-14 03:18:14.231764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.347 [2024-12-14 03:18:14.231770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.347 [2024-12-14 03:18:14.231785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.347 qpair failed and we were unable to recover it. 
00:36:59.347 [2024-12-14 03:18:14.241664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.347 [2024-12-14 03:18:14.241723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.347 [2024-12-14 03:18:14.241736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.347 [2024-12-14 03:18:14.241743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.347 [2024-12-14 03:18:14.241750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.347 [2024-12-14 03:18:14.241765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.347 qpair failed and we were unable to recover it. 00:36:59.347 [2024-12-14 03:18:14.251770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.347 [2024-12-14 03:18:14.251868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.347 [2024-12-14 03:18:14.251882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.347 [2024-12-14 03:18:14.251889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.347 [2024-12-14 03:18:14.251895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.347 [2024-12-14 03:18:14.251911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.347 qpair failed and we were unable to recover it. 00:36:59.347 [2024-12-14 03:18:14.261791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.347 [2024-12-14 03:18:14.261891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.347 [2024-12-14 03:18:14.261905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.347 [2024-12-14 03:18:14.261912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.347 [2024-12-14 03:18:14.261918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.347 [2024-12-14 03:18:14.261933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.347 qpair failed and we were unable to recover it. 
00:36:59.347 [2024-12-14 03:18:14.271821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.347 [2024-12-14 03:18:14.271874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.347 [2024-12-14 03:18:14.271887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.347 [2024-12-14 03:18:14.271894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.347 [2024-12-14 03:18:14.271900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.347 [2024-12-14 03:18:14.271916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.347 qpair failed and we were unable to recover it. 00:36:59.347 [2024-12-14 03:18:14.281844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.347 [2024-12-14 03:18:14.281902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.347 [2024-12-14 03:18:14.281914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.347 [2024-12-14 03:18:14.281921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.347 [2024-12-14 03:18:14.281928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.347 [2024-12-14 03:18:14.281943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.348 qpair failed and we were unable to recover it. 00:36:59.348 [2024-12-14 03:18:14.291874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.348 [2024-12-14 03:18:14.291931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.348 [2024-12-14 03:18:14.291944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.348 [2024-12-14 03:18:14.291953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.348 [2024-12-14 03:18:14.291960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.348 [2024-12-14 03:18:14.291974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.348 qpair failed and we were unable to recover it. 
00:36:59.348 [2024-12-14 03:18:14.301892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.348 [2024-12-14 03:18:14.301947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.348 [2024-12-14 03:18:14.301960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.348 [2024-12-14 03:18:14.301966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.348 [2024-12-14 03:18:14.301973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.348 [2024-12-14 03:18:14.301988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.348 qpair failed and we were unable to recover it. 00:36:59.348 [2024-12-14 03:18:14.311943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.348 [2024-12-14 03:18:14.311994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.348 [2024-12-14 03:18:14.312006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.348 [2024-12-14 03:18:14.312013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.348 [2024-12-14 03:18:14.312019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.348 [2024-12-14 03:18:14.312035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.348 qpair failed and we were unable to recover it. 00:36:59.348 [2024-12-14 03:18:14.321945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.348 [2024-12-14 03:18:14.322017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.348 [2024-12-14 03:18:14.322031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.348 [2024-12-14 03:18:14.322038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.348 [2024-12-14 03:18:14.322044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.348 [2024-12-14 03:18:14.322060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.348 qpair failed and we were unable to recover it. 
00:36:59.348 [2024-12-14 03:18:14.331978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.348 [2024-12-14 03:18:14.332034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.348 [2024-12-14 03:18:14.332048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.348 [2024-12-14 03:18:14.332055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.348 [2024-12-14 03:18:14.332061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.348 [2024-12-14 03:18:14.332080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.348 qpair failed and we were unable to recover it. 00:36:59.348 [2024-12-14 03:18:14.341938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.348 [2024-12-14 03:18:14.341986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.348 [2024-12-14 03:18:14.341999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.348 [2024-12-14 03:18:14.342007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.348 [2024-12-14 03:18:14.342014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.348 [2024-12-14 03:18:14.342030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.348 qpair failed and we were unable to recover it. 00:36:59.348 [2024-12-14 03:18:14.352029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.348 [2024-12-14 03:18:14.352107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.348 [2024-12-14 03:18:14.352120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.348 [2024-12-14 03:18:14.352127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.348 [2024-12-14 03:18:14.352133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90 00:36:59.348 [2024-12-14 03:18:14.352148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:59.348 qpair failed and we were unable to recover it. 
00:36:59.348 [2024-12-14 03:18:14.362080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:59.348 [2024-12-14 03:18:14.362135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:59.348 [2024-12-14 03:18:14.362148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:59.348 [2024-12-14 03:18:14.362155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:59.348 [2024-12-14 03:18:14.362162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90
00:36:59.348 [2024-12-14 03:18:14.362177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:59.348 qpair failed and we were unable to recover it.
00:36:59.348 [2024-12-14 03:18:14.372103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:59.348 [2024-12-14 03:18:14.372156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:59.348 [2024-12-14 03:18:14.372168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:59.348 [2024-12-14 03:18:14.372175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:59.348 [2024-12-14 03:18:14.372182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3b4c000b90
00:36:59.348 [2024-12-14 03:18:14.372197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:59.348 qpair failed and we were unable to recover it.
00:36:59.348 [2024-12-14 03:18:14.372366] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:36:59.348 A controller has encountered a failure and is being reset.
00:36:59.348 Controller properly reset.
00:36:59.348 Initializing NVMe Controllers
00:36:59.348 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:59.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:59.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:36:59.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:36:59.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:36:59.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:36:59.348 Initialization complete. Launching workers.
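With the controller reset and reattached, the long run of identical blocks above reads as one event: the target had already torn down controller ID 0x1, so every I/O-qpair Fabrics CONNECT the host retried (roughly every 10 ms) was rejected with sct 1, sc 130, the qpair could not be recovered, and the host finally failed a Keep Alive, reset the controller and reconnected. The status code is logged in decimal; 130 is 0x82, which in the NVMe-oF Fabrics command-specific status space corresponds to a rejected Connect (Connect Invalid Parameters, if that spec mapping is remembered correctly). A minimal sketch for tallying that signature from a saved console log; the helper name and log path are illustrative assumptions, not part of the test suite:

  #!/usr/bin/env bash
  # connect_failure_summary.sh - hypothetical helper, not part of the SPDK tree
  log=${1:-console.log}                      # log path is an assumption
  # each failed attempt ends with this marker line
  attempts=$(grep -c 'qpair failed and we were unable to recover it' "$log")
  # sc is logged in decimal by nvme_fabric.c; show it in hex as well
  sc_dec=$(grep -o 'sc [0-9]*' "$log" | head -n1 | awk '{print $2}')
  printf 'failed CONNECT attempts: %s\n' "$attempts"
  [ -n "$sc_dec" ] && printf 'status code: %s (0x%02x)\n' "$sc_dec" "$sc_dec"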
00:36:59.348 Starting thread on core 1
00:36:59.348 Starting thread on core 2
00:36:59.348 Starting thread on core 3
00:36:59.348 Starting thread on core 0
00:36:59.348 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:36:59.348
00:36:59.348 real 0m10.829s
00:36:59.348 user 0m18.975s
00:36:59.348 sys 0m4.805s
00:36:59.348 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:59.348 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:59.348 ************************************
00:36:59.348 END TEST nvmf_target_disconnect_tc2
00:36:59.348 ************************************
00:36:59.607 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:36:59.607 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:36:59.607 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:36:59.607 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:59.607 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:36:59.607 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:59.607 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:36:59.607 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:59.607 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:59.607 rmmod nvme_tcp
00:36:59.607 rmmod nvme_fabrics
00:36:59.607 rmmod nvme_keyring
00:36:59.607 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:59.607 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:36:59.607 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:36:59.607 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 391786 ']'
00:36:59.607 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 391786
00:36:59.608 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 391786 ']'
00:36:59.608 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 391786
00:36:59.608 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:36:59.608 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:59.608 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 391786
00:36:59.608 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:36:59.608 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:36:59.608 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 391786'
00:36:59.608 killing process with pid 391786
00:36:59.608 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 391786
00:36:59.608 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 391786
00:36:59.867 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:59.867 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:59.867 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:59.867 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:36:59.867 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:36:59.867 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:59.867 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:36:59.867 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:59.867 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:59.867 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:59.867 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:59.867 03:18:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:01.785 03:18:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:01.785
00:37:01.785 real 0m19.514s
00:37:01.785 user 0m46.932s
00:37:01.785 sys 0m9.614s
00:37:01.785 03:18:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:01.785 03:18:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:37:01.785 ************************************
00:37:01.785 END TEST nvmf_target_disconnect
00:37:01.785 ************************************
00:37:02.044 03:18:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:37:02.044
00:37:02.044 real 7m20.959s
00:37:02.044 user 16m45.701s
00:37:02.044 sys 2m7.348s
00:37:02.044 03:18:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:02.044 03:18:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:37:02.044 ************************************
00:37:02.044 END TEST nvmf_host
00:37:02.044 ************************************
00:37:02.044 03:18:16 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:37:02.044 03:18:16 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:37:02.044 03:18:16 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:37:02.044 03:18:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:37:02.044 03:18:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:02.044 03:18:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:02.044 ************************************
00:37:02.044 START TEST nvmf_target_core_interrupt_mode
00:37:02.044 ************************************
00:37:02.044 03:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
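The nvmftestfini walk a few records above is the whole teardown for this suite: unload the kernel NVMe-oF initiator modules, stop the SPDK application that was left running (pid 391786, reported as reactor_4 in the trace), drop the SPDK_NVMF iptables rules and flush the test interface before the next suite starts. A condensed, standalone sketch of the same sequence; the pid, interface and namespace names are the ones from this run, and remove_spdk_ns is approximated with a plain ip netns delete, which is an assumption about what that helper does:

  #!/usr/bin/env bash
  # teardown sketch mirroring the nvmftestfini trace above (illustrative only)
  app_pid=391786                          # target pid taken from this log
  sync
  for _ in {1..20}; do                    # retry loop, as in nvmf/common.sh@125
      modprobe -v -r nvme-tcp && break    # also drags out nvme_fabrics/nvme_keyring
  done
  modprobe -v -r nvme-fabrics
  kill "$app_pid" && wait "$app_pid"      # wait only succeeds if the target is our child
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the SPDK_NVMF rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # stand-in for remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear the second test port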
00:37:02.044 * Looking for test storage...
00:37:02.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:02.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.044 --rc genhtml_branch_coverage=1 00:37:02.044 --rc genhtml_function_coverage=1 00:37:02.044 --rc genhtml_legend=1 00:37:02.044 --rc geninfo_all_blocks=1 00:37:02.044 --rc geninfo_unexecuted_blocks=1 00:37:02.044 00:37:02.044 ' 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:02.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.044 --rc genhtml_branch_coverage=1 00:37:02.044 --rc genhtml_function_coverage=1 00:37:02.044 --rc genhtml_legend=1 00:37:02.044 --rc geninfo_all_blocks=1 00:37:02.044 --rc geninfo_unexecuted_blocks=1 00:37:02.044 00:37:02.044 ' 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:02.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.044 --rc genhtml_branch_coverage=1 00:37:02.044 --rc genhtml_function_coverage=1 00:37:02.044 --rc genhtml_legend=1 00:37:02.044 --rc geninfo_all_blocks=1 00:37:02.044 --rc geninfo_unexecuted_blocks=1 00:37:02.044 00:37:02.044 ' 00:37:02.044 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:02.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.045 --rc genhtml_branch_coverage=1 00:37:02.045 --rc genhtml_function_coverage=1 00:37:02.045 --rc genhtml_legend=1 00:37:02.045 --rc geninfo_all_blocks=1 00:37:02.045 --rc geninfo_unexecuted_blocks=1 00:37:02.045 00:37:02.045 ' 00:37:02.045 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:02.303 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:02.303 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:02.304 ************************************ 00:37:02.304 START TEST nvmf_abort 00:37:02.304 ************************************ 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:02.304 * Looking for test storage... 00:37:02.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:02.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.304 --rc genhtml_branch_coverage=1 00:37:02.304 --rc genhtml_function_coverage=1 00:37:02.304 --rc genhtml_legend=1 00:37:02.304 --rc geninfo_all_blocks=1 00:37:02.304 --rc geninfo_unexecuted_blocks=1 00:37:02.304 00:37:02.304 ' 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:02.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.304 --rc genhtml_branch_coverage=1 00:37:02.304 --rc genhtml_function_coverage=1 00:37:02.304 --rc genhtml_legend=1 00:37:02.304 --rc geninfo_all_blocks=1 00:37:02.304 --rc geninfo_unexecuted_blocks=1 00:37:02.304 00:37:02.304 ' 00:37:02.304 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:02.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.304 --rc genhtml_branch_coverage=1 00:37:02.304 --rc genhtml_function_coverage=1 00:37:02.304 --rc genhtml_legend=1 00:37:02.304 --rc geninfo_all_blocks=1 00:37:02.304 --rc geninfo_unexecuted_blocks=1 00:37:02.304 00:37:02.304 ' 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:02.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.305 --rc genhtml_branch_coverage=1 00:37:02.305 --rc genhtml_function_coverage=1 00:37:02.305 --rc genhtml_legend=1 00:37:02.305 --rc geninfo_all_blocks=1 00:37:02.305 --rc geninfo_unexecuted_blocks=1 00:37:02.305 00:37:02.305 ' 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:02.305 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:02.563 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:02.563 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:02.563 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:02.563 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.563 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.563 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.563 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:02.563 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.563 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:02.563 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:02.563 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:02.563 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:02.563 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:02.563 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:02.564 03:18:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:02.564 03:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:07.834 03:18:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:07.834 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
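For readers following the gather_supported_nvmf_pci_devs trace above: the harness builds per-family PCI ID lists (e810, x722, mlx) and matches installed NICs against them, here finding two Intel 0x8086:0x159b ports bound to the ice driver. A minimal stand-alone sketch of the same lookup using plain lspci rather than the script's pci_bus_cache (the device ID 159b is taken from this trace; other hosts will differ):

  # list E810-class ports (vendor 8086, device 159b) and their net device names
  for pci in $(lspci -Dnmm -d 8086:159b | awk '{print $1}'); do
      echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
  done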
00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:07.834 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:07.834 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:07.835 Found net devices under 0000:af:00.0: cvl_0_0 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:07.835 Found net devices under 0000:af:00.1: cvl_0_1 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:07.835 03:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:08.093 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:08.093 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:08.093 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:08.093 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:08.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:08.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:37:08.352 00:37:08.352 --- 10.0.0.2 ping statistics --- 00:37:08.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.352 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:08.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:08.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:37:08.352 00:37:08.352 --- 10.0.0.1 ping statistics --- 00:37:08.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.352 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=394219 
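The block above is the harness wiring the two E810 ports back-to-back for NVMe/TCP: the target-side port cvl_0_0 is moved into a fresh network namespace, both sides get 10.0.0.x/24 addresses, port 4420 is opened in the firewall, and a ping in each direction confirms the path. A condensed sketch of those same commands (interface and namespace names are the ones reported in the trace and are host-specific):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> root namespace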
00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 394219 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 394219 ']' 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:08.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:08.352 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.352 [2024-12-14 03:18:23.389171] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:08.352 [2024-12-14 03:18:23.390060] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:08.352 [2024-12-14 03:18:23.390092] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:08.352 [2024-12-14 03:18:23.465075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:08.611 [2024-12-14 03:18:23.486968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:08.611 [2024-12-14 03:18:23.486999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:08.612 [2024-12-14 03:18:23.487006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:08.612 [2024-12-14 03:18:23.487012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:08.612 [2024-12-14 03:18:23.487017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:08.612 [2024-12-14 03:18:23.488218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:08.612 [2024-12-14 03:18:23.488342] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:08.612 [2024-12-14 03:18:23.488343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:08.612 [2024-12-14 03:18:23.550241] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:08.612 [2024-12-14 03:18:23.551143] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:08.612 [2024-12-14 03:18:23.551343] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:08.612 [2024-12-14 03:18:23.551489] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.612 [2024-12-14 03:18:23.617133] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.612 Malloc0 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.612 Delay0 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
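At this point the target application is up (launched above as ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE, i.e. three reactors running in interrupt mode), and abort.sh configures it through the rpc_cmd wrapper, which talks to the default /var/tmp/spdk.sock. Spelled out against scripts/rpc.py, the configuration traced above looks roughly like this (values mirror the trace; the delay-bdev latencies are per SPDK's bdev_delay_create, assumed here to be microseconds):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0     # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from abort.sh
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0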
00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.612 [2024-12-14 03:18:23.709083] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.612 03:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:37:08.870 [2024-12-14 03:18:23.797057] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:11.403 Initializing NVMe Controllers 00:37:11.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:11.403 controller IO queue size 128 less than required 00:37:11.403 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:11.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:11.403 Initialization complete. Launching workers. 
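The last two rpc_cmd calls above publish the subsystem and the discovery service on 10.0.0.2:4420, and the abort example is then run from the root namespace as the initiator. With the long Jenkins workspace prefix dropped for readability, the sequence is approximately:

  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

A plausible reading of the counters that follow: of the 38018 abort commands submitted, 37957 succeeded (the same 37957 I/Os are reported as "failed" because their aborts terminated them), 61 aborts lost the race with normal I/O completion and were unsuccessful, and 66 could not be submitted; no abort command itself failed.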
00:37:11.403 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37957 00:37:11.403 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38018, failed to submit 66 00:37:11.403 success 37957, unsuccessful 61, failed 0 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:11.403 rmmod nvme_tcp 00:37:11.403 rmmod nvme_fabrics 00:37:11.403 rmmod nvme_keyring 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 394219 ']' 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 394219 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 394219 ']' 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 394219 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:11.403 03:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394219 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394219' 00:37:11.403 killing process with pid 394219 00:37:11.403 
03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 394219 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 394219 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:11.403 03:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:13.309 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:13.309 00:37:13.309 real 0m11.033s 00:37:13.309 user 0m10.282s 00:37:13.309 sys 0m5.546s 00:37:13.309 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:13.309 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:13.309 ************************************ 00:37:13.309 END TEST nvmf_abort 00:37:13.309 ************************************ 00:37:13.309 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:13.309 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:13.309 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:13.309 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:13.309 ************************************ 00:37:13.309 START TEST nvmf_ns_hotplug_stress 00:37:13.309 ************************************ 00:37:13.309 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:13.309 * Looking for test storage... 
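Before nvmf_ns_hotplug_stress begins, the abort case has just torn itself down via nvmftestfini in the trace above. A compressed sketch of that teardown; the final namespace deletion is what the harness's _remove_spdk_ns helper is assumed to perform, since its body is redirected away in the trace:

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  sync
  modprobe -v -r nvme-tcp              # also drops nvme_fabrics / nvme_keyring, as the rmmod lines show
  kill $nvmfpid                        # nvmfpid=394219 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove the port-4420 ACCEPT rule added earlier
  ip netns delete cvl_0_0_ns_spdk      # assumed content of _remove_spdk_ns
  ip -4 addr flush cvl_0_1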
00:37:13.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:37:13.569 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:13.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.570 --rc genhtml_branch_coverage=1 00:37:13.570 --rc genhtml_function_coverage=1 00:37:13.570 --rc genhtml_legend=1 00:37:13.570 --rc geninfo_all_blocks=1 00:37:13.570 --rc geninfo_unexecuted_blocks=1 00:37:13.570 00:37:13.570 ' 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:13.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.570 --rc genhtml_branch_coverage=1 00:37:13.570 --rc genhtml_function_coverage=1 00:37:13.570 --rc genhtml_legend=1 00:37:13.570 --rc geninfo_all_blocks=1 00:37:13.570 --rc geninfo_unexecuted_blocks=1 00:37:13.570 00:37:13.570 ' 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:13.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.570 --rc genhtml_branch_coverage=1 00:37:13.570 --rc genhtml_function_coverage=1 00:37:13.570 --rc genhtml_legend=1 00:37:13.570 --rc geninfo_all_blocks=1 00:37:13.570 --rc geninfo_unexecuted_blocks=1 00:37:13.570 00:37:13.570 ' 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:13.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.570 --rc genhtml_branch_coverage=1 00:37:13.570 --rc genhtml_function_coverage=1 
00:37:13.570 --rc genhtml_legend=1 00:37:13.570 --rc geninfo_all_blocks=1 00:37:13.570 --rc geninfo_unexecuted_blocks=1 00:37:13.570 00:37:13.570 ' 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
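The lt 1.15 2 / cmp_versions exchange replayed a few entries above is scripts/common.sh deciding whether the installed lcov is a 1.x release and, if so, exporting the --rc lcov_branch_coverage options seen in the LCOV_OPTS strings. A simplified stand-alone sketch of that field-by-field comparison (an illustration of the idea, not the script's exact implementation):

  lt() {                                   # succeed when version $1 sorts before $2
      local -a a b; local i
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  if lt "$(lcov --version | awk '{print $NF}')" 2; then
      echo "lcov 1.x detected: enabling --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
  fi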
00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:37:13.570 03:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:20.143 03:18:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:20.143 03:18:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:20.143 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:20.143 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:20.144 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:20.144 
03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:20.144 Found net devices under 0000:af:00.0: cvl_0_0 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:20.144 Found net devices under 0000:af:00.1: cvl_0_1 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:20.144 03:18:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:20.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:20.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:37:20.144 00:37:20.144 --- 10.0.0.2 ping statistics --- 00:37:20.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:20.144 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:20.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:20.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:37:20.144 00:37:20.144 --- 10.0.0.1 ping statistics --- 00:37:20.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:20.144 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=396503 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 396503 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 396503 ']' 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:20.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:20.144 [2024-12-14 03:18:34.486180] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:20.144 [2024-12-14 03:18:34.487071] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:20.144 [2024-12-14 03:18:34.487105] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:20.144 [2024-12-14 03:18:34.550827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:20.144 [2024-12-14 03:18:34.572895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:20.144 [2024-12-14 03:18:34.572930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:20.144 [2024-12-14 03:18:34.572937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:20.144 [2024-12-14 03:18:34.572944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:20.144 [2024-12-14 03:18:34.572949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:20.144 [2024-12-14 03:18:34.574091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:20.144 [2024-12-14 03:18:34.574199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:20.144 [2024-12-14 03:18:34.574200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:20.144 [2024-12-14 03:18:34.636564] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:20.144 [2024-12-14 03:18:34.637471] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:20.144 [2024-12-14 03:18:34.637795] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:20.144 [2024-12-14 03:18:34.637910] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
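The setup traced above reduces to the following target-side preparation. This is a minimal sketch reconstructed only from the commands visible in this run; the interface names (cvl_0_0, cvl_0_1) and the 10.0.0.x addresses are specific to this machine, so treat them as placeholders.

# Move the target-side port into its own network namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port, verify reachability both ways, then start the target
# inside the namespace in interrupt mode on cores 1-3 (-m 0xE).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &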
00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:20.144 [2024-12-14 03:18:34.870847] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:20.144 03:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:20.144 03:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:20.144 [2024-12-14 03:18:35.271129] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:20.408 03:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:20.408 03:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:37:20.667 Malloc0 00:37:20.667 03:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:20.926 Delay0 00:37:20.926 03:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:21.184 03:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:37:21.184 NULL1 00:37:21.184 03:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
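Before the perf load starts, the test provisions the subsystem over the RPC socket. A condensed sketch of the rpc.py calls traced above (paths shortened; sizes and latency values are the ones used in this run):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_malloc_create 32 512 -b Malloc0                      # small malloc backing bdev
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # first namespace, the one the loop removes/re-adds
rpc.py bdev_null_create NULL1 1000 512                           # null bdev the loop keeps resizing
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1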
00:37:21.443 03:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=396554 00:37:21.443 03:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:37:21.443 03:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:21.443 03:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:21.702 03:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:21.961 03:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:37:21.961 03:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:37:21.961 true 00:37:21.961 03:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:21.961 03:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:22.219 03:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:22.477 03:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:37:22.477 03:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:37:22.477 true 00:37:22.736 03:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:22.736 03:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:22.736 Read completed with error (sct=0, sc=11) 00:37:22.736 03:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:22.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.995 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:37:22.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.995 03:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:37:22.995 03:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:37:23.253 true 00:37:23.253 03:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:23.253 03:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:24.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.189 03:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:24.189 03:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:37:24.189 03:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:37:24.448 true 00:37:24.448 03:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:24.448 03:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:24.706 03:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:24.964 03:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:37:24.964 03:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:37:24.964 true 00:37:24.964 03:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:24.964 03:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:26.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:26.339 03:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:26.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:26.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:26.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:26.339 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:26.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:26.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:26.339 03:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:37:26.339 03:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:37:26.597 true 00:37:26.597 03:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:26.597 03:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:27.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:27.532 03:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:27.532 03:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:37:27.532 03:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:37:27.790 true 00:37:27.790 03:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:27.790 03:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:28.048 03:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:28.307 03:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:37:28.307 03:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:37:28.307 true 00:37:28.307 03:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:28.307 03:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:29.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.683 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:29.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.683 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:37:29.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.941 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:37:29.941 03:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:37:29.941 true 00:37:29.941 03:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:29.941 03:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:30.875 03:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:31.134 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:37:31.134 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:37:31.134 true 00:37:31.134 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:31.134 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:31.392 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:31.651 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:37:31.651 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:37:31.910 true 00:37:31.910 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:31.910 03:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:32.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:32.845 03:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:32.845 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:37:33.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:33.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:33.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:33.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:33.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:33.104 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:37:33.104 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:37:33.363 true 00:37:33.363 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:33.363 03:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:34.299 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:34.299 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.299 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:37:34.299 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:37:34.557 true 00:37:34.557 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:34.557 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:34.816 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:35.074 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:37:35.074 03:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:37:35.074 true 00:37:35.074 03:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:35.074 03:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:36.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:36.451 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:36.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:36.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:36.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:36.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:36.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:36.451 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:37:36.451 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:37:36.710 true 00:37:36.710 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:36.710 03:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:37.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:37.645 03:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:37.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:37.904 03:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:37:37.904 03:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:37:37.904 true 00:37:37.904 03:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:37.904 03:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:38.162 03:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:38.421 03:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:37:38.421 03:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:37:38.421 true 00:37:38.421 03:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:38.679 03:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:39.616 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:37:39.616 03:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:39.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:39.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:39.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:39.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:39.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:39.875 03:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:37:39.875 03:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:37:40.134 true 00:37:40.134 03:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:40.134 03:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:41.068 03:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:41.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:41.068 03:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:37:41.068 03:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:37:41.327 true 00:37:41.327 03:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:41.327 03:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:41.585 03:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:41.843 03:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:37:41.843 03:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:37:41.843 true 00:37:41.843 03:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:41.843 03:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:43.219 
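The repeating ns_hotplug_stress.sh@44-@50 entries are iterations of the hot-plug loop running while spdk_nvme_perf exercises the subsystem. A rough sketch of that loop, reconstructed from the traced commands (variable names are assumptions); the reads that fail with sct=0, sc=11 while a namespace is detached are what -Q 1000 rate-limits into the "Message suppressed 999 times" lines.

# Load generator: 30 s of 512-byte queued random reads against the target.
./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                           # loop until perf finishes
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # detach namespace 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach it
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 "$null_size"                      # grow NULL1 each pass
done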
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:43.219 03:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:43.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:43.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:43.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:43.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:43.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:43.219 03:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:37:43.219 03:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:37:43.477 true 00:37:43.477 03:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:43.477 03:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:44.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:44.413 03:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:44.413 03:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:37:44.413 03:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:37:44.672 true 00:37:44.672 03:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:44.672 03:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:44.672 03:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:44.930 03:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:37:44.930 03:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:37:45.188 true 00:37:45.189 03:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:45.189 03:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:46.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:46.565 03:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:46.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:46.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:46.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:46.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:46.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:46.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:46.565 03:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:37:46.565 03:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:37:46.824 true 00:37:46.824 03:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:46.824 03:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:47.761 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:47.761 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:47.761 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:48.019 true 00:37:48.019 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:48.019 03:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:48.278 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:48.278 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:48.278 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:37:48.537 true 00:37:48.537 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:48.537 03:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:49.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:49.912 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:49.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:49.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:49.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:49.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:49.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:49.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:49.912 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:37:49.912 03:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:50.169 true 00:37:50.169 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:50.169 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:51.103 03:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:51.103 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:37:51.103 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:37:51.362 true 00:37:51.362 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:51.362 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:51.362 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:51.620 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:37:51.620 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:37:51.879 true 00:37:51.879 03:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:51.879 03:19:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:51.879 Initializing NVMe Controllers
00:37:51.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:51.879 Controller IO queue size 128, less than required.
00:37:51.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:51.879 Controller IO queue size 128, less than required.
00:37:51.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:51.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:37:51.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:37:51.879 Initialization complete. Launching workers.
00:37:51.879 ========================================================
00:37:51.879                                                                              Latency(us)
00:37:51.879 Device Information                                                     :       IOPS      MiB/s    Average        min         max
00:37:51.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2278.98       1.11   37199.81    1719.01  1128609.92
00:37:51.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17600.47       8.59    7250.88    1605.27   368322.48
00:37:51.879 ========================================================
00:37:51.879 Total                                                                  :   19879.45       9.71   10684.22    1605.27  1128609.92
00:37:51.879
00:37:52.138 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:52.138 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:37:52.138 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:37:52.396 true 00:37:52.396 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 396554 00:37:52.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (396554) - No such process 00:37:52.396 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 396554 00:37:52.396 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:52.654 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:52.913 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:37:52.913 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:37:52.913 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:52.913 03:19:07
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:52.913 03:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:52.913 null0 00:37:52.913 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:52.913 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:52.913 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:53.172 null1 00:37:53.172 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:53.172 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:53.172 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:53.430 null2 00:37:53.430 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:53.430 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:53.430 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:53.430 null3 00:37:53.430 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:53.430 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:53.430 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:53.689 null4 00:37:53.689 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:53.689 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:53.689 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:53.947 null5 00:37:53.948 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:53.948 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:53.948 03:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:53.948 null6 00:37:54.207 03:19:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:54.207 null7 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
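
The sh@58-60 entries above trace the setup phase of the multi-threaded part of the test: eight null bdevs (null0 through null7) are created over JSON-RPC with the same size arguments seen in the trace (100 and 4096), before any hot-plug workers start. A minimal sketch of that loop, assuming the script keeps the rpc.py path in a variable (the exact variable name is not visible in this trace):

    #!/usr/bin/env bash
    # Setup sketch for the loop traced at ns_hotplug_stress.sh@58-60.
    # rpc_py is the rpc.py path shown in the trace; nthreads/pids mirror the
    # "nthreads=8" and "pids=()" entries above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096   # creates null0 .. null7 with the trace's size arguments
    done
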
00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
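
The interleaved sh@62-64 entries show the driver loop that starts one background add_remove worker per null bdev and records its PID; the "add_remove 1 null0" through "add_remove 8 null7" entries indicate that worker N drives namespace ID N. A sketch of that pattern, reusing rpc_py/nthreads/pids from the setup sketch above:

    # Launch one hot-plug worker per null bdev and remember its PID (cf. sh@62-64).
    # add_remove is the helper traced at sh@14-18; "$!" expands to the PID of the
    # most recently started background job.
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
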
00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:54.207 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
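
The per-worker sh@14-17 entries ("local nsid=... bdev=...", "(( i < 10 ))", nvmf_subsystem_add_ns -n ...) are consistent with an add_remove helper that attaches its bdev as a fixed namespace ID and detaches it again, ten times in a row. A hedged reconstruction from those trace lines:

    # Reconstruction of the add_remove helper suggested by the sh@14-18 entries.
    # rpc_py is the same rpc.py path as in the setup sketch above.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # Hot-plug the bdev as namespace $nsid on cnode1, then hot-unplug it.
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

Because all eight workers run concurrently, their add/remove RPCs interleave, which is exactly the churn of nvmf_subsystem_add_ns/nvmf_subsystem_remove_ns entries that fills the remainder of this log.
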
00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 397033 397034 397037 397038 397040 397042 397043 397045 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.208 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:54.467 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:54.467 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:54.467 03:19:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:54.467 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:54.467 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:54.467 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:54.467 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:54.467 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:54.726 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.726 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.726 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:54.726 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.726 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.726 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:54.726 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.726 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.726 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:54.726 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.726 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.726 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:54.726 
03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.727 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.727 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:54.727 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.727 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.727 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:54.727 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.727 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.727 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.727 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.727 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:54.727 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:54.985 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:54.985 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:54.985 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:54.985 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:54.985 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:54.985 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
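
The "wait 397033 397034 ..." entry at sh@66 a few entries back is the driver blocking on all eight worker PIDs while that churn plays out; in shell terms this is simply:

    # Block until every background add_remove worker has exited (cf. sh@66).
    # pids is the array filled in the launch sketch above.
    wait "${pids[@]}"
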
00:37:54.985 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:54.985 03:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:54.985 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.985 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.985 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:54.985 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.985 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.985 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:54.985 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.985 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.985 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:54.985 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.985 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.985 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:54.986 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.986 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.986 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:54.986 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.986 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.986 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:54.986 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.986 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.986 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:54.986 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.986 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.986 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:55.245 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:55.245 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:55.245 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:55.245 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:55.245 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:55.245 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:55.245 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:55.245 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:55.503 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.503 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.504 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.504 
03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:55.763 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:55.763 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:55.763 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:55.763 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:55.763 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:55.763 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:55.763 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:55.763 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:56.021 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.021 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.022 03:19:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.022 03:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:56.022 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:56.022 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:56.022 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:37:56.022 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:56.022 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:56.022 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:56.022 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:56.022 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:56.280 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.280 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.280 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:56.280 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.280 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.280 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:56.280 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.280 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.280 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:56.281 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.281 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.281 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:56.281 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.281 03:19:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.281 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.281 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:56.281 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.281 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:56.281 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.281 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.281 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:56.281 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.281 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.281 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:56.539 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:56.539 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:56.539 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:56.539 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:56.539 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:56.539 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:56.539 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:56.539 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:56.798 03:19:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.798 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:57.057 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:57.057 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:57.057 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:57.057 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:57.057 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:57.057 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:57.057 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:57.057 03:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:57.057 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.057 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.057 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:57.057 03:19:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.057 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.057 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:57.057 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.057 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.057 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:57.057 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.057 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.057 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:57.317 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.576 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:57.835 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:57.835 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:57.835 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:57.835 03:19:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:57.835 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:57.835 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:57.835 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:57.835 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:58.093 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.093 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.093 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:58.093 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.093 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.094 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:58.094 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.094 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.094 03:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:58.094 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:58.352 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:58.353 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:58.353 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:58.353 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:58.353 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:58.353 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:58.353 rmmod nvme_tcp 00:37:58.353 
rmmod nvme_fabrics 00:37:58.353 rmmod nvme_keyring 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 396503 ']' 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 396503 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 396503 ']' 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 396503 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396503 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396503' 00:37:58.612 killing process with pid 396503 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 396503 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 396503 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:58.612 
03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:58.612 03:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:01.146 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:01.146
00:38:01.146 real 0m47.457s
00:38:01.146 user 2m59.577s
00:38:01.146 sys 0m19.671s
00:38:01.146 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:01.146 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:38:01.146 ************************************
00:38:01.146 END TEST nvmf_ns_hotplug_stress
00:38:01.146 ************************************
00:38:01.146 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:38:01.146 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:38:01.146 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:01.146 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:38:01.146 ************************************
00:38:01.146 START TEST nvmf_delete_subsystem
00:38:01.146 ************************************
00:38:01.146 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:38:01.146 * Looking for test storage...
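What just finished above as nvmf_ns_hotplug_stress is easier to read once the pattern is extracted from the xtrace noise: lines @16-@18 of target/ns_hotplug_stress.sh form a ten-iteration add/remove loop per namespace, and the doubled (( ++i )) / (( i < 10 )) entries show several of those loops running at the same time. The lines below are a minimal stand-alone sketch of that pattern, not a copy of the SPDK script; the add_remove helper name, the backgrounding of one loop per namespace, and the final wait are inferred from the interleaved counters in the log, while the rpc.py path, the subsystem NQN, and the null0..null7 bdev names are taken from it. It assumes an SPDK nvmf target is already running with those bdevs and nqn.2016-06.io.spdk:cnode1 configured.

# sketch (assumptions as noted above), one add/remove loop per namespace
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do                               # ns_hotplug_stress.sh@16 in the log
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"   # @17: attach nsid backed by bdev
        "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"           # @18: detach it again
    done
}

for n in {1..8}; do
    add_remove "$n" "null$((n - 1))" &    # eight loops in flight against the same subsystem
done
wait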
00:38:01.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:01.146 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:01.146 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:38:01.146 03:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:01.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.146 --rc genhtml_branch_coverage=1 00:38:01.146 --rc genhtml_function_coverage=1 00:38:01.146 --rc genhtml_legend=1 00:38:01.146 --rc geninfo_all_blocks=1 00:38:01.146 --rc geninfo_unexecuted_blocks=1 00:38:01.146 00:38:01.146 ' 00:38:01.146 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:01.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.146 --rc genhtml_branch_coverage=1 00:38:01.146 --rc genhtml_function_coverage=1 00:38:01.146 --rc genhtml_legend=1 00:38:01.146 --rc geninfo_all_blocks=1 00:38:01.147 --rc geninfo_unexecuted_blocks=1 00:38:01.147 00:38:01.147 ' 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:01.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.147 --rc genhtml_branch_coverage=1 00:38:01.147 --rc genhtml_function_coverage=1 00:38:01.147 --rc genhtml_legend=1 00:38:01.147 --rc geninfo_all_blocks=1 00:38:01.147 --rc geninfo_unexecuted_blocks=1 00:38:01.147 00:38:01.147 ' 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:01.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.147 --rc genhtml_branch_coverage=1 00:38:01.147 --rc genhtml_function_coverage=1 00:38:01.147 --rc 
genhtml_legend=1 00:38:01.147 --rc geninfo_all_blocks=1 00:38:01.147 --rc geninfo_unexecuted_blocks=1 00:38:01.147 00:38:01.147 ' 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:01.147 03:19:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:01.147 03:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:07.714 03:19:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:07.714 03:19:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:07.714 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:07.714 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:07.714 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:07.715 03:19:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:07.715 Found net devices under 0000:af:00.0: cvl_0_0 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:07.715 Found net devices under 0000:af:00.1: cvl_0_1 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:07.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:07.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:38:07.715 00:38:07.715 --- 10.0.0.2 ping statistics --- 00:38:07.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:07.715 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:07.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:07.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:38:07.715 00:38:07.715 --- 10.0.0.1 ping statistics --- 00:38:07.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:07.715 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=399476 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 399476 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 399476 ']' 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:07.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
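[editor's note] For reference, the nvmf_tcp_init sequence traced above amounts to moving the target-side port (cvl_0_0) into a private network namespace, leaving the initiator-side port (cvl_0_1) in the root namespace, and opening TCP/4420 between them. A condensed sketch of those commands, assuming the same device names and addresses as this run (run as root; this is an illustration of the traced steps, not the nvmf/common.sh implementation itself, and the iptables comment tag is omitted):

    # flush any stale addresses on both ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # target port goes into its own namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in on the initiator side, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1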
00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:07.715 03:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:07.715 [2024-12-14 03:19:21.939873] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:07.715 [2024-12-14 03:19:21.940763] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:07.715 [2024-12-14 03:19:21.940796] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:07.715 [2024-12-14 03:19:22.020218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:07.715 [2024-12-14 03:19:22.041390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:07.715 [2024-12-14 03:19:22.041427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:07.715 [2024-12-14 03:19:22.041434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:07.715 [2024-12-14 03:19:22.041440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:07.715 [2024-12-14 03:19:22.041445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:07.715 [2024-12-14 03:19:22.042551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:07.715 [2024-12-14 03:19:22.042554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.715 [2024-12-14 03:19:22.104605] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:07.715 [2024-12-14 03:19:22.105168] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:07.715 [2024-12-14 03:19:22.105371] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
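[editor's note] The nvmfappstart step above launches the target inside that namespace with interrupt mode enabled and then blocks in waitforlisten until the RPC socket answers. A minimal stand-in for that flow, assuming the workspace path from this job; the polling loop is illustrative only, not the actual waitforlisten implementation:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # wait up to ~10 s for the app to come up and create its RPC socket
    for _ in $(seq 1 100); do
        [[ -S /var/tmp/spdk.sock ]] && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.1
    done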
00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:07.716 [2024-12-14 03:19:22.183365] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:07.716 [2024-12-14 03:19:22.211660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:07.716 NULL1 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.716 03:19:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:07.716 Delay0 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=399499 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:07.716 03:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:07.716 [2024-12-14 03:19:22.323091] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
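[editor's note] The rpc_cmd calls traced in the last few blocks build the whole test configuration: a TCP transport, one subsystem with a listener on 10.0.0.2:4420, and a delay bdev stacked on a null bdev so that every I/O sits in the target for about a second. rpc_cmd forwards its arguments to the target's JSON-RPC interface, so the roughly equivalent direct invocations via scripts/rpc.py would look like the following (argument values copied from the trace; treating rpc_cmd as a thin rpc.py wrapper is an assumption):

    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512                 # 1000 MB null bdev, 512-byte blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s artificial latency per I/O
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0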
00:38:09.617 03:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:09.617 03:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.617 03:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 starting I/O failed: -6 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 starting I/O failed: -6 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 starting I/O failed: -6 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 starting I/O failed: -6 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 starting I/O failed: -6 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 starting I/O failed: -6 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 starting I/O failed: -6 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 starting I/O failed: -6 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 starting I/O failed: -6 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 starting I/O failed: -6 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 starting I/O failed: -6 00:38:09.617 [2024-12-14 03:19:24.460367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x848f70 is same with the state(6) to be set 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Read 
completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Read completed with error (sct=0, sc=8) 00:38:09.617 Write completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 starting I/O failed: -6 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 starting I/O failed: -6 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 starting I/O failed: -6 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 starting I/O failed: -6 00:38:09.618 Read completed with error (sct=0, sc=8) 
00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 starting I/O failed: -6 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 starting I/O failed: -6 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 starting I/O failed: -6 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 starting I/O failed: -6 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 starting I/O failed: -6 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 starting I/O failed: -6 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 starting I/O failed: -6 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 [2024-12-14 03:19:24.461077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7f5000d4d0 is same with the state(6) to be set 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with 
error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Write completed with error (sct=0, sc=8) 00:38:09.618 Read completed with error (sct=0, sc=8) 00:38:10.557 [2024-12-14 03:19:25.419694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x847190 is same with the state(6) to be set 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 [2024-12-14 03:19:25.463179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7f5000d060 is same with the state(6) to be set 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, 
sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 [2024-12-14 03:19:25.463985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x849400 is same with the state(6) to be set 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 [2024-12-14 03:19:25.464149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8495e0 is same with the state(6) to be set 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Write completed with error (sct=0, sc=8) 00:38:10.557 Read completed with error (sct=0, sc=8) 00:38:10.557 [2024-12-14 03:19:25.464647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7f5000d800 is same with the state(6) to be set 00:38:10.557 Initializing NVMe Controllers 00:38:10.557 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:10.557 
Controller IO queue size 128, less than required. 00:38:10.557 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:10.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:10.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:10.557 Initialization complete. Launching workers. 00:38:10.557 ======================================================== 00:38:10.557 Latency(us) 00:38:10.557 Device Information : IOPS MiB/s Average min max 00:38:10.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.77 0.08 909616.27 303.30 1043913.36 00:38:10.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 167.74 0.08 899258.28 246.72 1042321.66 00:38:10.557 ======================================================== 00:38:10.557 Total : 331.51 0.16 904375.25 246.72 1043913.36 00:38:10.558 00:38:10.558 [2024-12-14 03:19:25.465394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x847190 (9): Bad file descriptor 00:38:10.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:38:10.558 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.558 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:38:10.558 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 399499 00:38:10.558 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 399499 00:38:11.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (399499) - No such process 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 399499 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 399499 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 399499 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.125 03:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:11.125 [2024-12-14 03:19:25.999612] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:11.125 03:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.125 03:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:11.125 03:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.125 03:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:11.125 03:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.125 03:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=399551 00:38:11.125 03:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:11.125 03:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:11.125 03:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 399551 00:38:11.125 03:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:11.125 [2024-12-14 03:19:26.087370] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
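[editor's note] Both perf runs in this test follow the same delete-while-I/O pattern: start spdk_nvme_perf in the background against the listener, give it a moment to queue work (the Delay0 bdev holds each request for ~1 s), delete the subsystem out from under it, then poll the perf pid until it exits. A hedged condensation of the first run, reusing SPDK and RPC from the sketches above (pid handling is illustrative; the perf arguments are copied from the trace):

    "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    # remove the subsystem while 128 requests are still queued behind the delay bdev
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # poll until perf gives up; the in-flight I/O completes with errors (sct=0, sc=8 above)
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && { echo 'perf did not exit in time' >&2; exit 1; }
        sleep 0.5
    done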
00:38:11.693 03:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:11.693 03:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 399551 00:38:11.693 03:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:11.951 03:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:11.951 03:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 399551 00:38:11.951 03:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:12.518 03:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:12.518 03:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 399551 00:38:12.518 03:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:13.086 03:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:13.086 03:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 399551 00:38:13.086 03:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:13.653 03:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:13.653 03:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 399551 00:38:13.653 03:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:14.219 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:14.219 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 399551 00:38:14.219 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:14.219 Initializing NVMe Controllers 00:38:14.219 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:14.220 Controller IO queue size 128, less than required. 00:38:14.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:14.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:14.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:14.220 Initialization complete. Launching workers. 
00:38:14.220 ======================================================== 00:38:14.220 Latency(us) 00:38:14.220 Device Information : IOPS MiB/s Average min max 00:38:14.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003471.40 1000132.99 1040636.46 00:38:14.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005506.31 1000535.38 1042677.72 00:38:14.220 ======================================================== 00:38:14.220 Total : 256.00 0.12 1004488.85 1000132.99 1042677.72 00:38:14.220 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 399551 00:38:14.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (399551) - No such process 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 399551 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:14.478 rmmod nvme_tcp 00:38:14.478 rmmod nvme_fabrics 00:38:14.478 rmmod nvme_keyring 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 399476 ']' 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 399476 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 399476 ']' 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 399476 00:38:14.478 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:38:14.737 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 399476 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 399476' 00:38:14.738 killing process with pid 399476 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 399476 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 399476 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:14.738 03:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:17.273 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:17.273 00:38:17.273 real 0m16.009s 00:38:17.273 user 0m26.110s 00:38:17.273 sys 0m5.931s 00:38:17.273 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:17.273 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:17.273 ************************************ 00:38:17.273 END TEST nvmf_delete_subsystem 00:38:17.273 ************************************ 00:38:17.273 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:17.273 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:17.273 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:38:17.273 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:17.273 ************************************ 00:38:17.273 START TEST nvmf_host_management 00:38:17.273 ************************************ 00:38:17.273 03:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:17.273 * Looking for test storage... 00:38:17.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:17.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.273 --rc genhtml_branch_coverage=1 00:38:17.273 --rc genhtml_function_coverage=1 00:38:17.273 --rc genhtml_legend=1 00:38:17.273 --rc geninfo_all_blocks=1 00:38:17.273 --rc geninfo_unexecuted_blocks=1 00:38:17.273 00:38:17.273 ' 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:17.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.273 --rc genhtml_branch_coverage=1 00:38:17.273 --rc genhtml_function_coverage=1 00:38:17.273 --rc genhtml_legend=1 00:38:17.273 --rc geninfo_all_blocks=1 00:38:17.273 --rc geninfo_unexecuted_blocks=1 00:38:17.273 00:38:17.273 ' 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:17.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.273 --rc genhtml_branch_coverage=1 00:38:17.273 --rc genhtml_function_coverage=1 00:38:17.273 --rc genhtml_legend=1 00:38:17.273 --rc geninfo_all_blocks=1 00:38:17.273 --rc geninfo_unexecuted_blocks=1 00:38:17.273 00:38:17.273 ' 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:17.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.273 --rc genhtml_branch_coverage=1 00:38:17.273 --rc genhtml_function_coverage=1 00:38:17.273 --rc genhtml_legend=1 
00:38:17.273 --rc geninfo_all_blocks=1 00:38:17.273 --rc geninfo_unexecuted_blocks=1 00:38:17.273 00:38:17.273 ' 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:17.273 03:19:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:38:17.273 03:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:22.685 03:19:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:22.685 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:22.685 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:22.685 Found net devices under 0000:af:00.0: cvl_0_0 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:22.685 Found net devices under 0000:af:00.1: cvl_0_1 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:22.685 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:22.686 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:22.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:22.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:38:22.950 00:38:22.950 --- 10.0.0.2 ping statistics --- 00:38:22.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:22.950 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:22.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:22.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:38:22.950 00:38:22.950 --- 10.0.0.1 ping statistics --- 00:38:22.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:22.950 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=401834 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 401834 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 401834 ']' 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:22.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:22.950 03:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:22.950 [2024-12-14 03:19:38.044526] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:22.950 [2024-12-14 03:19:38.045500] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:22.950 [2024-12-14 03:19:38.045538] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:23.210 [2024-12-14 03:19:38.122869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:23.210 [2024-12-14 03:19:38.146631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:23.210 [2024-12-14 03:19:38.146666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:23.210 [2024-12-14 03:19:38.146673] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:23.210 [2024-12-14 03:19:38.146679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:23.210 [2024-12-14 03:19:38.146684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:23.210 [2024-12-14 03:19:38.148121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:23.210 [2024-12-14 03:19:38.148241] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:23.210 [2024-12-14 03:19:38.148350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:23.210 [2024-12-14 03:19:38.148351] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:23.210 [2024-12-14 03:19:38.211585] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:23.210 [2024-12-14 03:19:38.212791] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:23.210 [2024-12-14 03:19:38.212922] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:23.210 [2024-12-14 03:19:38.213241] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:23.210 [2024-12-14 03:19:38.213279] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
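The trace above records nvmf/common.sh moving the target-side e810 port (cvl_0_0) into a private network namespace, addressing both ends, verifying connectivity, and then launching nvmf_tgt inside that namespace in interrupt mode on core mask 0x1E (reactors on cores 1-4). A minimal sketch of that bring-up, using only the interface names, addresses, ports, and flags that appear in the log — the actual helpers also flush addresses first, tag the iptables rule with a comment, and perform additional checks — and assuming root privileges:

#!/usr/bin/env bash
# Sketch of the namespace-based TCP target setup recorded in the trace above.
# Names, IPs, and flags are copied from the log; this is not the verbatim
# nvmf/common.sh implementation.
NS=cvl_0_0_ns_spdk            # namespace that will hold the target-side port
TGT_IF=cvl_0_0                # target-side net device (0000:af:00.0)
INIT_IF=cvl_0_1               # initiator-side net device (0000:af:00.1)

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # move target port into the namespace
ip addr add 10.0.0.1/24 dev "$INIT_IF"                     # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT

# Connectivity check in both directions, as in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Start the NVMe-oF target inside the namespace, interrupt mode, cores 1-4 (0x1E).
ip netns exec "$NS" \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &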
00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:23.210 [2024-12-14 03:19:38.281107] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.210 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:23.469 Malloc0 00:38:23.469 [2024-12-14 03:19:38.365380] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=401882 00:38:23.469 03:19:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 401882 /var/tmp/bdevperf.sock 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 401882 ']' 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:23.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:23.469 { 00:38:23.469 "params": { 00:38:23.469 "name": "Nvme$subsystem", 00:38:23.469 "trtype": "$TEST_TRANSPORT", 00:38:23.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:23.469 "adrfam": "ipv4", 00:38:23.469 "trsvcid": "$NVMF_PORT", 00:38:23.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:23.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:23.469 "hdgst": ${hdgst:-false}, 00:38:23.469 "ddgst": ${ddgst:-false} 00:38:23.469 }, 00:38:23.469 "method": "bdev_nvme_attach_controller" 00:38:23.469 } 00:38:23.469 EOF 00:38:23.469 )") 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:23.469 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:23.469 "params": { 00:38:23.469 "name": "Nvme0", 00:38:23.469 "trtype": "tcp", 00:38:23.469 "traddr": "10.0.0.2", 00:38:23.469 "adrfam": "ipv4", 00:38:23.469 "trsvcid": "4420", 00:38:23.469 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:23.469 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:23.469 "hdgst": false, 00:38:23.469 "ddgst": false 00:38:23.469 }, 00:38:23.469 "method": "bdev_nvme_attach_controller" 00:38:23.469 }' 00:38:23.469 [2024-12-14 03:19:38.460710] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:23.469 [2024-12-14 03:19:38.460756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401882 ] 00:38:23.469 [2024-12-14 03:19:38.536466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.469 [2024-12-14 03:19:38.558600] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.730 Running I/O for 10 seconds... 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=102 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 102 -ge 100 ']' 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:23.730 [2024-12-14 03:19:38.844827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee240 is same with the state(6) to be set 00:38:23.730 [2024-12-14 03:19:38.844867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee240 is same with the state(6) to be set 00:38:23.730 [2024-12-14 03:19:38.844875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee240 is same with the state(6) to be set 00:38:23.730 [2024-12-14 03:19:38.844882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee240 is same with the state(6) to be set 00:38:23.730 [2024-12-14 03:19:38.844889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee240 is same with the state(6) to be set 00:38:23.730 [2024-12-14 03:19:38.844895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee240 is same with the state(6) to be set 00:38:23.730 [2024-12-14 03:19:38.844901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee240 is same with the state(6) to be set 00:38:23.730 [2024-12-14 03:19:38.844906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee240 is same with the state(6) to be set 00:38:23.730 [2024-12-14 03:19:38.844912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee240 is same with the state(6) to be set 00:38:23.730 [2024-12-14 03:19:38.844918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee240 is same with the state(6) to be set 00:38:23.730 [2024-12-14 03:19:38.844923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee240 is same with the state(6) to be set 
00:38:23.730 [2024-12-14 03:19:38.844929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dee240 is same with the state(6) to be set 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.730 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:23.730 [2024-12-14 03:19:38.856086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:23.730 [2024-12-14 03:19:38.856116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.730 [2024-12-14 03:19:38.856125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:23.730 [2024-12-14 03:19:38.856132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.730 [2024-12-14 03:19:38.856139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:23.730 [2024-12-14 03:19:38.856146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.730 [2024-12-14 03:19:38.856153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:23.730 [2024-12-14 03:19:38.856164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.730 [2024-12-14 03:19:38.856171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a15490 is same with the state(6) to be set 00:38:23.730 [2024-12-14 03:19:38.856247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.730 [2024-12-14 03:19:38.856257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.730 [2024-12-14 03:19:38.856270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.730 [2024-12-14 03:19:38.856277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.730 [2024-12-14 03:19:38.856285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.730 [2024-12-14 03:19:38.856292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.730 [2024-12-14 03:19:38.856300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:38:23.730 [2024-12-14 03:19:38.856306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.730 [2024-12-14 03:19:38.856321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.730 [2024-12-14 03:19:38.856328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.730 [2024-12-14 03:19:38.856337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.730 [2024-12-14 03:19:38.856344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.730 [2024-12-14 03:19:38.856352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.730 [2024-12-14 03:19:38.856359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.730 [2024-12-14 03:19:38.856367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:23.731 [2024-12-14 03:19:38.856466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 
[2024-12-14 03:19:38.856610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 
03:19:38.856763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 
03:19:38.856910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.731 [2024-12-14 03:19:38.856954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.731 [2024-12-14 03:19:38.856962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.856968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.856975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.856981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.856992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.856999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.857007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.857013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.857021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.857027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.857035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.857041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.857050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 
03:19:38.857057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.857064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.857071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.857079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.857085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.857092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.857100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.857107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.857114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.857122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.857128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.857136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.857143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.857151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.857158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.857166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.857174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.857181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 03:19:38.857188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.857196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:23.732 [2024-12-14 
03:19:38.857202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:23.732 [2024-12-14 03:19:38.858123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:38:23.732 task offset: 24576 on job bdev=Nvme0n1 fails 00:38:23.732 00:38:23.732 Latency(us) 00:38:23.732 [2024-12-14T02:19:38.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.732 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:23.732 Job: Nvme0n1 ended in about 0.11 seconds with error 00:38:23.732 Verification LBA range: start 0x0 length 0x400 00:38:23.732 Nvme0n1 : 0.11 1764.67 110.29 588.22 0.00 25079.57 1482.36 26713.72 00:38:23.732 [2024-12-14T02:19:38.865Z] =================================================================================================================== 00:38:23.732 [2024-12-14T02:19:38.865Z] Total : 1764.67 110.29 588.22 0.00 25079.57 1482.36 26713.72 00:38:23.732 [2024-12-14 03:19:38.860447] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:23.732 [2024-12-14 03:19:38.860469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a15490 (9): Bad file descriptor 00:38:23.991 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.991 03:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:38:23.991 [2024-12-14 03:19:38.905364] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:38:24.927 03:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 401882 00:38:24.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (401882) - No such process 00:38:24.927 03:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:38:24.927 03:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:38:24.927 03:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:38:24.927 03:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:38:24.927 03:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:24.927 03:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:24.927 03:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:24.927 03:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:24.927 { 00:38:24.927 "params": { 00:38:24.927 "name": "Nvme$subsystem", 00:38:24.927 "trtype": "$TEST_TRANSPORT", 00:38:24.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.927 "adrfam": "ipv4", 00:38:24.927 "trsvcid": "$NVMF_PORT", 00:38:24.927 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.927 "hdgst": ${hdgst:-false}, 00:38:24.927 "ddgst": ${ddgst:-false} 00:38:24.927 }, 00:38:24.927 "method": "bdev_nvme_attach_controller" 00:38:24.927 } 00:38:24.927 EOF 00:38:24.927 )") 00:38:24.927 03:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:24.927 03:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:38:24.927 03:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:24.927 03:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:24.927 "params": { 00:38:24.927 "name": "Nvme0", 00:38:24.927 "trtype": "tcp", 00:38:24.927 "traddr": "10.0.0.2", 00:38:24.927 "adrfam": "ipv4", 00:38:24.927 "trsvcid": "4420", 00:38:24.927 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.927 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:24.928 "hdgst": false, 00:38:24.928 "ddgst": false 00:38:24.928 }, 00:38:24.928 "method": "bdev_nvme_attach_controller" 00:38:24.928 }' 00:38:24.928 [2024-12-14 03:19:39.916964] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:24.928 [2024-12-14 03:19:39.917012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401909 ] 00:38:24.928 [2024-12-14 03:19:39.992177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.928 [2024-12-14 03:19:40.015503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:25.186 Running I/O for 1 seconds... 
00:38:26.122 2016.00 IOPS, 126.00 MiB/s 00:38:26.122 Latency(us) 00:38:26.122 [2024-12-14T02:19:41.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:26.122 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:26.122 Verification LBA range: start 0x0 length 0x400 00:38:26.122 Nvme0n1 : 1.02 2048.84 128.05 0.00 0.00 30631.80 2949.12 27088.21 00:38:26.122 [2024-12-14T02:19:41.255Z] =================================================================================================================== 00:38:26.122 [2024-12-14T02:19:41.255Z] Total : 2048.84 128.05 0.00 0.00 30631.80 2949.12 27088.21 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:26.381 rmmod nvme_tcp 00:38:26.381 rmmod nvme_fabrics 00:38:26.381 rmmod nvme_keyring 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 401834 ']' 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 401834 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 401834 ']' 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 401834 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:26.381 03:19:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 401834 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 401834' 00:38:26.381 killing process with pid 401834 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 401834 00:38:26.381 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 401834 00:38:26.640 [2024-12-14 03:19:41.602874] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:38:26.640 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:26.640 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:26.640 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:26.640 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:38:26.640 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:38:26.640 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:26.640 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:38:26.640 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:26.640 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:26.640 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:26.640 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:26.640 03:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:38:29.174 00:38:29.174 real 0m11.739s 00:38:29.174 user 0m15.780s 00:38:29.174 sys 0m5.931s 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:29.174 ************************************ 00:38:29.174 END TEST nvmf_host_management 00:38:29.174 ************************************ 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:29.174 ************************************ 00:38:29.174 START TEST nvmf_lvol 00:38:29.174 ************************************ 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:29.174 * Looking for test storage... 00:38:29.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:38:29.174 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:29.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.175 --rc genhtml_branch_coverage=1 00:38:29.175 --rc genhtml_function_coverage=1 00:38:29.175 --rc genhtml_legend=1 00:38:29.175 --rc geninfo_all_blocks=1 00:38:29.175 --rc geninfo_unexecuted_blocks=1 00:38:29.175 00:38:29.175 ' 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:29.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.175 --rc genhtml_branch_coverage=1 00:38:29.175 --rc genhtml_function_coverage=1 00:38:29.175 --rc genhtml_legend=1 00:38:29.175 --rc geninfo_all_blocks=1 00:38:29.175 --rc geninfo_unexecuted_blocks=1 00:38:29.175 00:38:29.175 ' 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:29.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.175 --rc genhtml_branch_coverage=1 00:38:29.175 --rc genhtml_function_coverage=1 00:38:29.175 --rc genhtml_legend=1 00:38:29.175 --rc geninfo_all_blocks=1 00:38:29.175 --rc geninfo_unexecuted_blocks=1 00:38:29.175 00:38:29.175 ' 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:29.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.175 --rc genhtml_branch_coverage=1 00:38:29.175 --rc genhtml_function_coverage=1 00:38:29.175 --rc genhtml_legend=1 00:38:29.175 --rc geninfo_all_blocks=1 00:38:29.175 --rc geninfo_unexecuted_blocks=1 00:38:29.175 00:38:29.175 ' 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:29.175 03:19:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:38:29.175 03:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:34.446 03:19:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:34.446 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:34.446 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:34.446 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:34.447 Found net devices under 0000:af:00.0: cvl_0_0 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:34.447 Found net devices under 0000:af:00.1: cvl_0_1 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:34.447 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:34.447 
03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:34.705 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:34.705 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:34.705 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:34.705 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:34.705 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:34.705 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:34.705 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:34.705 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:34.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:34.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:38:34.705 00:38:34.705 --- 10.0.0.2 ping statistics --- 00:38:34.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:34.705 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:38:34.705 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:34.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:34.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:38:34.705 00:38:34.705 --- 10.0.0.1 ping statistics --- 00:38:34.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:34.705 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:38:34.705 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:34.705 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=404166 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 404166 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 404166 ']' 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:34.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:34.706 03:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:34.964 [2024-12-14 03:19:49.845593] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
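Before the lvol tests proper begin, the nvmf_tcp_init steps above split the two cvl interfaces between a fresh network namespace for the target and the root namespace for the initiator, open TCP port 4420, verify reachability with a ping in each direction, and then launch nvmf_tgt inside the namespace in interrupt mode on cores 0-2. Condensed into a sketch (interface names, addresses and masks are the values from this run; the nvmf_tgt path assumes the SPDK repo root):

# Target port goes into its own namespace, initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic in, then sanity-check both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target inside the namespace: interrupt mode, core mask 0x7 (cores 0-2), backgrounded
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &

The namespace split gives the target and the initiator distinct IP stacks on one machine, so the NVMe/TCP path between 10.0.0.1 and 10.0.0.2 is exercised end to end rather than looping back through a single interface.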
00:38:34.964 [2024-12-14 03:19:49.846463] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:34.964 [2024-12-14 03:19:49.846494] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:34.964 [2024-12-14 03:19:49.926007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:34.964 [2024-12-14 03:19:49.947226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:34.964 [2024-12-14 03:19:49.947263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:34.964 [2024-12-14 03:19:49.947270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:34.964 [2024-12-14 03:19:49.947276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:34.965 [2024-12-14 03:19:49.947280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:34.965 [2024-12-14 03:19:49.948520] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:34.965 [2024-12-14 03:19:49.948630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.965 [2024-12-14 03:19:49.948631] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:34.965 [2024-12-14 03:19:50.010970] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:34.965 [2024-12-14 03:19:50.011808] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:34.965 [2024-12-14 03:19:50.011934] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:34.965 [2024-12-14 03:19:50.012112] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
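With the target up, the rpc.py calls that follow assemble the volume stack the lvol test exercises: two 64 MB malloc bdevs (512-byte blocks) striped into a raid0, an lvstore on the raid, a 20 MB lvol, and an NVMe-oF/TCP subsystem exporting that lvol on 10.0.0.2:4420. A consolidated sketch of the sequence (run from the SPDK repo root; the lvstore and lvol identifiers are whatever the create calls return — 43d1bbbe-... and 68edb8cc-... in this run):

rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192

# Backing store: 2 x 64 MB malloc bdevs -> raid0 (64 KB strip) -> lvstore -> 20 MB lvol
$rpc bdev_malloc_create 64 512            # creates Malloc0
$rpc bdev_malloc_create 64 512            # creates Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

# Export the lvol over NVMe/TCP
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420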
00:38:34.965 03:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:34.965 03:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:38:34.965 03:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:34.965 03:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:34.965 03:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:34.965 03:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:34.965 03:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:35.223 [2024-12-14 03:19:50.257345] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:35.223 03:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:35.482 03:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:38:35.482 03:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:35.741 03:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:38:35.741 03:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:38:36.000 03:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:38:36.000 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=43d1bbbe-3834-4eb1-981b-476a6812487b 00:38:36.000 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 43d1bbbe-3834-4eb1-981b-476a6812487b lvol 20 00:38:36.258 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=68edb8cc-7070-4941-ac10-3441c86e610c 00:38:36.258 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:36.516 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 68edb8cc-7070-4941-ac10-3441c86e610c 00:38:36.774 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:36.774 [2024-12-14 03:19:51.881195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:38:37.032 03:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:37.032 03:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=404226 00:38:37.032 03:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:38:37.032 03:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:38:38.407 03:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 68edb8cc-7070-4941-ac10-3441c86e610c MY_SNAPSHOT 00:38:38.407 03:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0542e676-abb5-42b4-9625-21a272dead05 00:38:38.407 03:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 68edb8cc-7070-4941-ac10-3441c86e610c 30 00:38:38.665 03:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0542e676-abb5-42b4-9625-21a272dead05 MY_CLONE 00:38:38.924 03:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b4de0e1e-1802-47a9-bf04-c130bf414466 00:38:38.924 03:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b4de0e1e-1802-47a9-bf04-c130bf414466 00:38:39.182 03:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 404226 00:38:49.156 Initializing NVMe Controllers 00:38:49.156 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:49.156 Controller IO queue size 128, less than required. 00:38:49.156 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:49.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:49.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:49.156 Initialization complete. Launching workers. 
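Before the latency summary that follows, the whole nvmf_lvol flow is easier to read in one place. This is a condensed sketch of the RPC sequence traced above, with rpc.py and binary paths shortened; the create-style RPCs print the name or UUID of the object they create, which is what gets captured into variables here.

  rpc=./scripts/rpc.py

  # Back the lvstore with a RAID-0 of two 64 MiB malloc bdevs (512-byte blocks).
  m0=$($rpc bdev_malloc_create 64 512)
  m1=$($rpc bdev_malloc_create 64 512)
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"

  # Create the lvstore and a 20 MiB lvol, then export the lvol over NVMe/TCP.
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Drive random writes from the initiator side for 10 s on cores 3-4 (0x18), and
  # while that runs, exercise snapshot / resize / clone / inflate on the live lvol.
  ./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  perf_pid=$!
  sleep 1
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  wait "$perf_pid"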
00:38:49.156 ======================================================== 00:38:49.156 Latency(us) 00:38:49.156 Device Information : IOPS MiB/s Average min max 00:38:49.156 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12291.20 48.01 10414.04 1482.95 60003.60 00:38:49.156 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12165.20 47.52 10524.08 3606.36 58951.00 00:38:49.156 ======================================================== 00:38:49.156 Total : 24456.40 95.53 10468.78 1482.95 60003.60 00:38:49.156 00:38:49.156 03:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:49.156 03:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 68edb8cc-7070-4941-ac10-3441c86e610c 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 43d1bbbe-3834-4eb1-981b-476a6812487b 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:49.156 rmmod nvme_tcp 00:38:49.156 rmmod nvme_fabrics 00:38:49.156 rmmod nvme_keyring 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 404166 ']' 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 404166 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 404166 ']' 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 404166 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 404166 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 404166' 00:38:49.156 killing process with pid 404166 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 404166 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 404166 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:49.156 03:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.549 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:50.549 00:38:50.549 real 0m21.821s 00:38:50.549 user 0m56.034s 00:38:50.549 sys 0m9.703s 00:38:50.549 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:50.549 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:50.549 ************************************ 00:38:50.549 END TEST nvmf_lvol 00:38:50.549 ************************************ 00:38:50.549 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:50.549 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:50.549 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:50.549 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:50.549 ************************************ 00:38:50.549 START TEST nvmf_lvs_grow 00:38:50.549 
************************************ 00:38:50.549 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:50.809 * Looking for test storage... 00:38:50.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:50.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.809 --rc genhtml_branch_coverage=1 00:38:50.809 --rc genhtml_function_coverage=1 00:38:50.809 --rc genhtml_legend=1 00:38:50.809 --rc geninfo_all_blocks=1 00:38:50.809 --rc geninfo_unexecuted_blocks=1 00:38:50.809 00:38:50.809 ' 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:50.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.809 --rc genhtml_branch_coverage=1 00:38:50.809 --rc genhtml_function_coverage=1 00:38:50.809 --rc genhtml_legend=1 00:38:50.809 --rc geninfo_all_blocks=1 00:38:50.809 --rc geninfo_unexecuted_blocks=1 00:38:50.809 00:38:50.809 ' 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:50.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.809 --rc genhtml_branch_coverage=1 00:38:50.809 --rc genhtml_function_coverage=1 00:38:50.809 --rc genhtml_legend=1 00:38:50.809 --rc geninfo_all_blocks=1 00:38:50.809 --rc geninfo_unexecuted_blocks=1 00:38:50.809 00:38:50.809 ' 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:50.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.809 --rc genhtml_branch_coverage=1 00:38:50.809 --rc genhtml_function_coverage=1 00:38:50.809 --rc genhtml_legend=1 00:38:50.809 --rc geninfo_all_blocks=1 00:38:50.809 --rc geninfo_unexecuted_blocks=1 00:38:50.809 00:38:50.809 ' 00:38:50.809 03:20:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.809 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:50.810 03:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:57.378 03:20:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
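The device-discovery pass below walks each candidate PCI function found by the ID scan above and keeps the ones that expose a kernel netdev. Reduced to its core (variable names as in nvmf/common.sh, the link-state and count checks elided), the loop is roughly:

  # For every candidate NIC, look up its netdev under sysfs and record the name.
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev dir(s) for this function
      [[ -e ${pci_net_devs[0]} ]] || continue            # no kernel netdev bound; skip it
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the names, e.g. cvl_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done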
00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:57.378 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:57.378 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:57.378 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:57.379 Found net devices under 0000:af:00.0: cvl_0_0 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:57.379 Found net devices under 0000:af:00.1: cvl_0_1 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:57.379 03:20:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:57.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:57.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:38:57.379 00:38:57.379 --- 10.0.0.2 ping statistics --- 00:38:57.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.379 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:57.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:57.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:38:57.379 00:38:57.379 --- 10.0.0.1 ping statistics --- 00:38:57.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.379 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=406596 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 406596 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 406596 ']' 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:57.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:57.379 [2024-12-14 03:20:11.758116] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
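The namespace plumbing traced in the chunks above (nvmf_tcp_init) reduces to a dozen commands: move the target port into its own network namespace, address both ends, and verify reachability before starting the target. A condensed sketch with the interface names from this run:

  # Target port cvl_0_0 goes into the cvl_0_0_ns_spdk namespace; initiator port
  # cvl_0_1 stays in the default namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # 10.0.0.1 = initiator side, 10.0.0.2 = target side (inside the namespace).
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Accept NVMe/TCP (port 4420) arriving on the initiator-side interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity-check both directions before launching nvmf_tgt.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1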
00:38:57.379 [2024-12-14 03:20:11.759002] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:57.379 [2024-12-14 03:20:11.759032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:57.379 [2024-12-14 03:20:11.838134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:57.379 [2024-12-14 03:20:11.859188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:57.379 [2024-12-14 03:20:11.859219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:57.379 [2024-12-14 03:20:11.859226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:57.379 [2024-12-14 03:20:11.859232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:57.379 [2024-12-14 03:20:11.859237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:57.379 [2024-12-14 03:20:11.859686] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:57.379 [2024-12-14 03:20:11.921274] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:57.379 [2024-12-14 03:20:11.921498] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:57.379 03:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:57.379 [2024-12-14 03:20:12.152343] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:57.379 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:57.379 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:57.379 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:57.379 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:57.379 ************************************ 00:38:57.380 START TEST lvs_grow_clean 00:38:57.380 ************************************ 00:38:57.380 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:38:57.380 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:57.380 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:57.380 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:57.380 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:57.380 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:57.380 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:57.380 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:57.380 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:57.380 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:57.380 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:57.380 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:57.639 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4c319b21-4fc8-4b85-8822-c0c525f0c5d7 00:38:57.639 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c319b21-4fc8-4b85-8822-c0c525f0c5d7 00:38:57.639 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:57.897 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:57.897 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:57.897 03:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4c319b21-4fc8-4b85-8822-c0c525f0c5d7 lvol 150 00:38:58.156 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6e115f68-d3f0-4d2d-9556-e1ee5d77f8ad 00:38:58.156 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:58.156 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:58.156 [2024-12-14 03:20:13.220060] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:58.156 [2024-12-14 03:20:13.220184] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:58.156 true 00:38:58.156 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c319b21-4fc8-4b85-8822-c0c525f0c5d7 00:38:58.156 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:58.414 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:58.414 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:58.673 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6e115f68-d3f0-4d2d-9556-e1ee5d77f8ad 00:38:58.931 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:58.931 [2024-12-14 03:20:13.968522] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:58.932 03:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:59.190 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=406669 00:38:59.190 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:59.190 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:59.190 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 406669 /var/tmp/bdevperf.sock 00:38:59.190 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 406669 ']' 00:38:59.190 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:38:59.190 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:59.190 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:59.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:59.190 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:59.190 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:59.190 [2024-12-14 03:20:14.223988] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:59.190 [2024-12-14 03:20:14.224033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406669 ] 00:38:59.190 [2024-12-14 03:20:14.297510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.190 [2024-12-14 03:20:14.319551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:59.449 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:59.449 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:59.449 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:59.708 Nvme0n1 00:38:59.708 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:59.966 [ 00:38:59.966 { 00:38:59.966 "name": "Nvme0n1", 00:38:59.966 "aliases": [ 00:38:59.966 "6e115f68-d3f0-4d2d-9556-e1ee5d77f8ad" 00:38:59.966 ], 00:38:59.966 "product_name": "NVMe disk", 00:38:59.966 "block_size": 4096, 00:38:59.966 "num_blocks": 38912, 00:38:59.966 "uuid": "6e115f68-d3f0-4d2d-9556-e1ee5d77f8ad", 00:38:59.966 "numa_id": 1, 00:38:59.966 "assigned_rate_limits": { 00:38:59.966 "rw_ios_per_sec": 0, 00:38:59.966 "rw_mbytes_per_sec": 0, 00:38:59.966 "r_mbytes_per_sec": 0, 00:38:59.966 "w_mbytes_per_sec": 0 00:38:59.966 }, 00:38:59.966 "claimed": false, 00:38:59.966 "zoned": false, 00:38:59.966 "supported_io_types": { 00:38:59.966 "read": true, 00:38:59.966 "write": true, 00:38:59.966 "unmap": true, 00:38:59.966 "flush": true, 00:38:59.966 "reset": true, 00:38:59.966 "nvme_admin": true, 00:38:59.966 "nvme_io": true, 00:38:59.966 "nvme_io_md": false, 00:38:59.966 "write_zeroes": true, 00:38:59.966 "zcopy": false, 00:38:59.967 "get_zone_info": false, 00:38:59.967 "zone_management": false, 00:38:59.967 "zone_append": false, 00:38:59.967 "compare": true, 00:38:59.967 "compare_and_write": true, 00:38:59.967 "abort": true, 00:38:59.967 "seek_hole": false, 00:38:59.967 "seek_data": false, 00:38:59.967 "copy": true, 
00:38:59.967 "nvme_iov_md": false 00:38:59.967 }, 00:38:59.967 "memory_domains": [ 00:38:59.967 { 00:38:59.967 "dma_device_id": "system", 00:38:59.967 "dma_device_type": 1 00:38:59.967 } 00:38:59.967 ], 00:38:59.967 "driver_specific": { 00:38:59.967 "nvme": [ 00:38:59.967 { 00:38:59.967 "trid": { 00:38:59.967 "trtype": "TCP", 00:38:59.967 "adrfam": "IPv4", 00:38:59.967 "traddr": "10.0.0.2", 00:38:59.967 "trsvcid": "4420", 00:38:59.967 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:59.967 }, 00:38:59.967 "ctrlr_data": { 00:38:59.967 "cntlid": 1, 00:38:59.967 "vendor_id": "0x8086", 00:38:59.967 "model_number": "SPDK bdev Controller", 00:38:59.967 "serial_number": "SPDK0", 00:38:59.967 "firmware_revision": "25.01", 00:38:59.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:59.967 "oacs": { 00:38:59.967 "security": 0, 00:38:59.967 "format": 0, 00:38:59.967 "firmware": 0, 00:38:59.967 "ns_manage": 0 00:38:59.967 }, 00:38:59.967 "multi_ctrlr": true, 00:38:59.967 "ana_reporting": false 00:38:59.967 }, 00:38:59.967 "vs": { 00:38:59.967 "nvme_version": "1.3" 00:38:59.967 }, 00:38:59.967 "ns_data": { 00:38:59.967 "id": 1, 00:38:59.967 "can_share": true 00:38:59.967 } 00:38:59.967 } 00:38:59.967 ], 00:38:59.967 "mp_policy": "active_passive" 00:38:59.967 } 00:38:59.967 } 00:38:59.967 ] 00:38:59.967 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=406685 00:38:59.967 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:59.967 03:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:59.967 Running I/O for 10 seconds... 
00:39:00.900 Latency(us) 00:39:00.900 [2024-12-14T02:20:16.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:00.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:00.900 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:39:00.900 [2024-12-14T02:20:16.033Z] =================================================================================================================== 00:39:00.900 [2024-12-14T02:20:16.033Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:39:00.900 00:39:01.835 03:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4c319b21-4fc8-4b85-8822-c0c525f0c5d7 00:39:02.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:02.093 Nvme0n1 : 2.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:39:02.093 [2024-12-14T02:20:17.226Z] =================================================================================================================== 00:39:02.093 [2024-12-14T02:20:17.226Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:39:02.093 00:39:02.093 true 00:39:02.093 03:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c319b21-4fc8-4b85-8822-c0c525f0c5d7 00:39:02.093 03:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:02.351 03:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:02.351 03:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:02.351 03:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 406685 00:39:02.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:02.918 Nvme0n1 : 3.00 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:39:02.918 [2024-12-14T02:20:18.051Z] =================================================================================================================== 00:39:02.918 [2024-12-14T02:20:18.051Z] Total : 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:39:02.918 00:39:03.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:03.854 Nvme0n1 : 4.00 23590.25 92.15 0.00 0.00 0.00 0.00 0.00 00:39:03.854 [2024-12-14T02:20:18.987Z] =================================================================================================================== 00:39:03.854 [2024-12-14T02:20:18.987Z] Total : 23590.25 92.15 0.00 0.00 0.00 0.00 0.00 00:39:03.854 00:39:05.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:05.230 Nvme0n1 : 5.00 23647.40 92.37 0.00 0.00 0.00 0.00 0.00 00:39:05.231 [2024-12-14T02:20:20.364Z] =================================================================================================================== 00:39:05.231 [2024-12-14T02:20:20.364Z] Total : 23647.40 92.37 0.00 0.00 0.00 0.00 0.00 00:39:05.231 00:39:06.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:06.166 Nvme0n1 : 6.00 23643.17 92.36 0.00 0.00 0.00 0.00 0.00 00:39:06.166 [2024-12-14T02:20:21.299Z] 
=================================================================================================================== 00:39:06.166 [2024-12-14T02:20:21.299Z] Total : 23643.17 92.36 0.00 0.00 0.00 0.00 0.00 00:39:06.166 00:39:07.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:07.102 Nvme0n1 : 7.00 23685.57 92.52 0.00 0.00 0.00 0.00 0.00 00:39:07.102 [2024-12-14T02:20:22.235Z] =================================================================================================================== 00:39:07.102 [2024-12-14T02:20:22.235Z] Total : 23685.57 92.52 0.00 0.00 0.00 0.00 0.00 00:39:07.102 00:39:08.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:08.038 Nvme0n1 : 8.00 23644.00 92.36 0.00 0.00 0.00 0.00 0.00 00:39:08.038 [2024-12-14T02:20:23.171Z] =================================================================================================================== 00:39:08.038 [2024-12-14T02:20:23.171Z] Total : 23644.00 92.36 0.00 0.00 0.00 0.00 0.00 00:39:08.038 00:39:08.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:08.973 Nvme0n1 : 9.00 23669.78 92.46 0.00 0.00 0.00 0.00 0.00 00:39:08.973 [2024-12-14T02:20:24.106Z] =================================================================================================================== 00:39:08.973 [2024-12-14T02:20:24.106Z] Total : 23669.78 92.46 0.00 0.00 0.00 0.00 0.00 00:39:08.973 00:39:09.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:09.909 Nvme0n1 : 10.00 23703.10 92.59 0.00 0.00 0.00 0.00 0.00 00:39:09.909 [2024-12-14T02:20:25.042Z] =================================================================================================================== 00:39:09.909 [2024-12-14T02:20:25.042Z] Total : 23703.10 92.59 0.00 0.00 0.00 0.00 0.00 00:39:09.909 00:39:09.909 00:39:09.909 Latency(us) 00:39:09.909 [2024-12-14T02:20:25.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:09.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:09.909 Nvme0n1 : 10.01 23703.61 92.59 0.00 0.00 5397.12 3105.16 30084.14 00:39:09.909 [2024-12-14T02:20:25.042Z] =================================================================================================================== 00:39:09.909 [2024-12-14T02:20:25.042Z] Total : 23703.61 92.59 0.00 0.00 5397.12 3105.16 30084.14 00:39:09.909 { 00:39:09.909 "results": [ 00:39:09.909 { 00:39:09.909 "job": "Nvme0n1", 00:39:09.909 "core_mask": "0x2", 00:39:09.909 "workload": "randwrite", 00:39:09.909 "status": "finished", 00:39:09.909 "queue_depth": 128, 00:39:09.909 "io_size": 4096, 00:39:09.909 "runtime": 10.005184, 00:39:09.909 "iops": 23703.61204751457, 00:39:09.909 "mibps": 92.59223456060379, 00:39:09.909 "io_failed": 0, 00:39:09.909 "io_timeout": 0, 00:39:09.909 "avg_latency_us": 5397.119530827119, 00:39:09.909 "min_latency_us": 3105.158095238095, 00:39:09.909 "max_latency_us": 30084.14476190476 00:39:09.909 } 00:39:09.909 ], 00:39:09.909 "core_count": 1 00:39:09.909 } 00:39:09.909 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 406669 00:39:09.909 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 406669 ']' 00:39:09.909 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 406669 
00:39:09.909 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:39:09.909 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:09.909 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 406669 00:39:10.175 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:10.175 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:10.175 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 406669' 00:39:10.175 killing process with pid 406669 00:39:10.175 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 406669 00:39:10.175 Received shutdown signal, test time was about 10.000000 seconds 00:39:10.175 00:39:10.175 Latency(us) 00:39:10.175 [2024-12-14T02:20:25.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:10.175 [2024-12-14T02:20:25.308Z] =================================================================================================================== 00:39:10.175 [2024-12-14T02:20:25.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:10.175 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 406669 00:39:10.175 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:10.438 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:10.696 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c319b21-4fc8-4b85-8822-c0c525f0c5d7 00:39:10.696 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:10.697 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:10.697 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:10.697 03:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:10.955 [2024-12-14 03:20:25.988122] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:10.955 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c319b21-4fc8-4b85-8822-c0c525f0c5d7 
00:39:10.955 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:39:10.955 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c319b21-4fc8-4b85-8822-c0c525f0c5d7 00:39:10.955 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:10.955 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:10.955 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:10.955 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:10.955 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:10.955 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:10.955 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:10.955 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:10.955 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c319b21-4fc8-4b85-8822-c0c525f0c5d7 00:39:11.213 request: 00:39:11.213 { 00:39:11.213 "uuid": "4c319b21-4fc8-4b85-8822-c0c525f0c5d7", 00:39:11.213 "method": "bdev_lvol_get_lvstores", 00:39:11.213 "req_id": 1 00:39:11.213 } 00:39:11.213 Got JSON-RPC error response 00:39:11.213 response: 00:39:11.213 { 00:39:11.213 "code": -19, 00:39:11.213 "message": "No such device" 00:39:11.213 } 00:39:11.213 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:39:11.213 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:11.213 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:11.213 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:11.214 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:11.472 aio_bdev 00:39:11.472 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
6e115f68-d3f0-4d2d-9556-e1ee5d77f8ad 00:39:11.472 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6e115f68-d3f0-4d2d-9556-e1ee5d77f8ad 00:39:11.472 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:11.472 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:39:11.472 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:11.472 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:11.472 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:11.472 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e115f68-d3f0-4d2d-9556-e1ee5d77f8ad -t 2000 00:39:11.731 [ 00:39:11.731 { 00:39:11.731 "name": "6e115f68-d3f0-4d2d-9556-e1ee5d77f8ad", 00:39:11.731 "aliases": [ 00:39:11.731 "lvs/lvol" 00:39:11.731 ], 00:39:11.731 "product_name": "Logical Volume", 00:39:11.731 "block_size": 4096, 00:39:11.731 "num_blocks": 38912, 00:39:11.731 "uuid": "6e115f68-d3f0-4d2d-9556-e1ee5d77f8ad", 00:39:11.731 "assigned_rate_limits": { 00:39:11.731 "rw_ios_per_sec": 0, 00:39:11.731 "rw_mbytes_per_sec": 0, 00:39:11.731 "r_mbytes_per_sec": 0, 00:39:11.731 "w_mbytes_per_sec": 0 00:39:11.731 }, 00:39:11.731 "claimed": false, 00:39:11.731 "zoned": false, 00:39:11.731 "supported_io_types": { 00:39:11.731 "read": true, 00:39:11.731 "write": true, 00:39:11.731 "unmap": true, 00:39:11.731 "flush": false, 00:39:11.731 "reset": true, 00:39:11.731 "nvme_admin": false, 00:39:11.731 "nvme_io": false, 00:39:11.731 "nvme_io_md": false, 00:39:11.731 "write_zeroes": true, 00:39:11.731 "zcopy": false, 00:39:11.731 "get_zone_info": false, 00:39:11.731 "zone_management": false, 00:39:11.731 "zone_append": false, 00:39:11.731 "compare": false, 00:39:11.731 "compare_and_write": false, 00:39:11.731 "abort": false, 00:39:11.731 "seek_hole": true, 00:39:11.731 "seek_data": true, 00:39:11.731 "copy": false, 00:39:11.731 "nvme_iov_md": false 00:39:11.731 }, 00:39:11.731 "driver_specific": { 00:39:11.731 "lvol": { 00:39:11.731 "lvol_store_uuid": "4c319b21-4fc8-4b85-8822-c0c525f0c5d7", 00:39:11.731 "base_bdev": "aio_bdev", 00:39:11.731 "thin_provision": false, 00:39:11.731 "num_allocated_clusters": 38, 00:39:11.731 "snapshot": false, 00:39:11.731 "clone": false, 00:39:11.731 "esnap_clone": false 00:39:11.731 } 00:39:11.731 } 00:39:11.731 } 00:39:11.731 ] 00:39:11.731 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:39:11.731 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:11.731 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c319b21-4fc8-4b85-8822-c0c525f0c5d7 00:39:11.990 03:20:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:11.990 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c319b21-4fc8-4b85-8822-c0c525f0c5d7 00:39:11.990 03:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:12.248 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:12.248 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6e115f68-d3f0-4d2d-9556-e1ee5d77f8ad 00:39:12.248 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4c319b21-4fc8-4b85-8822-c0c525f0c5d7 00:39:12.506 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:12.765 00:39:12.765 real 0m15.514s 00:39:12.765 user 0m15.117s 00:39:12.765 sys 0m1.465s 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:12.765 ************************************ 00:39:12.765 END TEST lvs_grow_clean 00:39:12.765 ************************************ 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:12.765 ************************************ 00:39:12.765 START TEST lvs_grow_dirty 00:39:12.765 ************************************ 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:12.765 03:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:13.024 03:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:13.024 03:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:13.282 03:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4ba16420-3e91-4507-baec-fbafb385b7b7 00:39:13.282 03:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ba16420-3e91-4507-baec-fbafb385b7b7 00:39:13.282 03:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:13.541 03:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:13.541 03:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:13.541 03:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4ba16420-3e91-4507-baec-fbafb385b7b7 lvol 150 00:39:13.541 03:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=92cbf20c-bc28-4e18-82e1-a24b46b60fef 00:39:13.541 03:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:13.541 03:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:13.800 [2024-12-14 03:20:28.796057] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:13.800 [2024-12-14 03:20:28.796181] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:13.800 true 00:39:13.800 03:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ba16420-3e91-4507-baec-fbafb385b7b7 00:39:13.800 03:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:14.058 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:14.058 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:14.317 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 92cbf20c-bc28-4e18-82e1-a24b46b60fef 00:39:14.317 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:14.575 [2024-12-14 03:20:29.544482] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:14.575 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:14.834 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:14.834 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=406920 00:39:14.834 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:14.834 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 406920 /var/tmp/bdevperf.sock 00:39:14.834 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 406920 ']' 00:39:14.834 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:14.834 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:14.834 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:14.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:39:14.834 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:14.834 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:14.834 [2024-12-14 03:20:29.774768] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:14.834 [2024-12-14 03:20:29.774813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406920 ] 00:39:14.834 [2024-12-14 03:20:29.845974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:14.834 [2024-12-14 03:20:29.867689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:14.834 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:14.834 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:14.834 03:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:15.092 Nvme0n1 00:39:15.092 03:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:15.351 [ 00:39:15.351 { 00:39:15.351 "name": "Nvme0n1", 00:39:15.351 "aliases": [ 00:39:15.351 "92cbf20c-bc28-4e18-82e1-a24b46b60fef" 00:39:15.351 ], 00:39:15.351 "product_name": "NVMe disk", 00:39:15.351 "block_size": 4096, 00:39:15.351 "num_blocks": 38912, 00:39:15.351 "uuid": "92cbf20c-bc28-4e18-82e1-a24b46b60fef", 00:39:15.351 "numa_id": 1, 00:39:15.351 "assigned_rate_limits": { 00:39:15.351 "rw_ios_per_sec": 0, 00:39:15.351 "rw_mbytes_per_sec": 0, 00:39:15.351 "r_mbytes_per_sec": 0, 00:39:15.351 "w_mbytes_per_sec": 0 00:39:15.351 }, 00:39:15.351 "claimed": false, 00:39:15.351 "zoned": false, 00:39:15.351 "supported_io_types": { 00:39:15.351 "read": true, 00:39:15.351 "write": true, 00:39:15.351 "unmap": true, 00:39:15.351 "flush": true, 00:39:15.351 "reset": true, 00:39:15.351 "nvme_admin": true, 00:39:15.351 "nvme_io": true, 00:39:15.351 "nvme_io_md": false, 00:39:15.351 "write_zeroes": true, 00:39:15.351 "zcopy": false, 00:39:15.351 "get_zone_info": false, 00:39:15.351 "zone_management": false, 00:39:15.351 "zone_append": false, 00:39:15.351 "compare": true, 00:39:15.351 "compare_and_write": true, 00:39:15.351 "abort": true, 00:39:15.351 "seek_hole": false, 00:39:15.351 "seek_data": false, 00:39:15.351 "copy": true, 00:39:15.351 "nvme_iov_md": false 00:39:15.351 }, 00:39:15.351 "memory_domains": [ 00:39:15.351 { 00:39:15.351 "dma_device_id": "system", 00:39:15.351 "dma_device_type": 1 00:39:15.351 } 00:39:15.351 ], 00:39:15.351 "driver_specific": { 00:39:15.351 "nvme": [ 00:39:15.351 { 00:39:15.351 "trid": { 00:39:15.351 "trtype": "TCP", 00:39:15.351 "adrfam": "IPv4", 00:39:15.351 "traddr": "10.0.0.2", 00:39:15.351 "trsvcid": "4420", 00:39:15.351 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:15.351 }, 00:39:15.351 "ctrlr_data": { 
00:39:15.351 "cntlid": 1, 00:39:15.351 "vendor_id": "0x8086", 00:39:15.351 "model_number": "SPDK bdev Controller", 00:39:15.351 "serial_number": "SPDK0", 00:39:15.351 "firmware_revision": "25.01", 00:39:15.351 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:15.351 "oacs": { 00:39:15.351 "security": 0, 00:39:15.351 "format": 0, 00:39:15.351 "firmware": 0, 00:39:15.351 "ns_manage": 0 00:39:15.351 }, 00:39:15.351 "multi_ctrlr": true, 00:39:15.351 "ana_reporting": false 00:39:15.351 }, 00:39:15.351 "vs": { 00:39:15.351 "nvme_version": "1.3" 00:39:15.351 }, 00:39:15.351 "ns_data": { 00:39:15.351 "id": 1, 00:39:15.351 "can_share": true 00:39:15.351 } 00:39:15.351 } 00:39:15.351 ], 00:39:15.351 "mp_policy": "active_passive" 00:39:15.351 } 00:39:15.351 } 00:39:15.351 ] 00:39:15.351 03:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=406938 00:39:15.351 03:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:15.351 03:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:15.351 Running I/O for 10 seconds... 00:39:16.726 Latency(us) 00:39:16.726 [2024-12-14T02:20:31.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:16.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:16.726 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:39:16.726 [2024-12-14T02:20:31.859Z] =================================================================================================================== 00:39:16.726 [2024-12-14T02:20:31.859Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:39:16.726 00:39:17.293 03:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4ba16420-3e91-4507-baec-fbafb385b7b7 00:39:17.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:17.551 Nvme0n1 : 2.00 23050.50 90.04 0.00 0.00 0.00 0.00 0.00 00:39:17.551 [2024-12-14T02:20:32.684Z] =================================================================================================================== 00:39:17.551 [2024-12-14T02:20:32.684Z] Total : 23050.50 90.04 0.00 0.00 0.00 0.00 0.00 00:39:17.551 00:39:17.551 true 00:39:17.551 03:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ba16420-3e91-4507-baec-fbafb385b7b7 00:39:17.552 03:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:17.810 03:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:17.810 03:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:17.810 03:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 406938 00:39:18.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:18.376 Nvme0n1 : 3.00 
23198.67 90.62 0.00 0.00 0.00 0.00 0.00 00:39:18.376 [2024-12-14T02:20:33.509Z] =================================================================================================================== 00:39:18.376 [2024-12-14T02:20:33.509Z] Total : 23198.67 90.62 0.00 0.00 0.00 0.00 0.00 00:39:18.376 00:39:19.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:19.752 Nvme0n1 : 4.00 23277.00 90.93 0.00 0.00 0.00 0.00 0.00 00:39:19.752 [2024-12-14T02:20:34.885Z] =================================================================================================================== 00:39:19.752 [2024-12-14T02:20:34.885Z] Total : 23277.00 90.93 0.00 0.00 0.00 0.00 0.00 00:39:19.752 00:39:20.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:20.686 Nvme0n1 : 5.00 23346.00 91.20 0.00 0.00 0.00 0.00 0.00 00:39:20.686 [2024-12-14T02:20:35.819Z] =================================================================================================================== 00:39:20.686 [2024-12-14T02:20:35.819Z] Total : 23346.00 91.20 0.00 0.00 0.00 0.00 0.00 00:39:20.686 00:39:21.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:21.620 Nvme0n1 : 6.00 23392.00 91.38 0.00 0.00 0.00 0.00 0.00 00:39:21.620 [2024-12-14T02:20:36.753Z] =================================================================================================================== 00:39:21.620 [2024-12-14T02:20:36.753Z] Total : 23392.00 91.38 0.00 0.00 0.00 0.00 0.00 00:39:21.620 00:39:22.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:22.555 Nvme0n1 : 7.00 23424.86 91.50 0.00 0.00 0.00 0.00 0.00 00:39:22.555 [2024-12-14T02:20:37.688Z] =================================================================================================================== 00:39:22.555 [2024-12-14T02:20:37.688Z] Total : 23424.86 91.50 0.00 0.00 0.00 0.00 0.00 00:39:22.555 00:39:23.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:23.489 Nvme0n1 : 8.00 23457.50 91.63 0.00 0.00 0.00 0.00 0.00 00:39:23.489 [2024-12-14T02:20:38.622Z] =================================================================================================================== 00:39:23.489 [2024-12-14T02:20:38.622Z] Total : 23457.50 91.63 0.00 0.00 0.00 0.00 0.00 00:39:23.489 00:39:24.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:24.424 Nvme0n1 : 9.00 23481.22 91.72 0.00 0.00 0.00 0.00 0.00 00:39:24.424 [2024-12-14T02:20:39.557Z] =================================================================================================================== 00:39:24.424 [2024-12-14T02:20:39.557Z] Total : 23481.22 91.72 0.00 0.00 0.00 0.00 0.00 00:39:24.424 00:39:25.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:25.800 Nvme0n1 : 10.00 23479.60 91.72 0.00 0.00 0.00 0.00 0.00 00:39:25.800 [2024-12-14T02:20:40.933Z] =================================================================================================================== 00:39:25.800 [2024-12-14T02:20:40.933Z] Total : 23479.60 91.72 0.00 0.00 0.00 0.00 0.00 00:39:25.800 00:39:25.800 00:39:25.800 Latency(us) 00:39:25.801 [2024-12-14T02:20:40.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:25.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:25.801 Nvme0n1 : 10.00 23472.22 91.69 0.00 0.00 5449.51 3183.18 26214.40 00:39:25.801 
[2024-12-14T02:20:40.934Z] =================================================================================================================== 00:39:25.801 [2024-12-14T02:20:40.934Z] Total : 23472.22 91.69 0.00 0.00 5449.51 3183.18 26214.40 00:39:25.801 { 00:39:25.801 "results": [ 00:39:25.801 { 00:39:25.801 "job": "Nvme0n1", 00:39:25.801 "core_mask": "0x2", 00:39:25.801 "workload": "randwrite", 00:39:25.801 "status": "finished", 00:39:25.801 "queue_depth": 128, 00:39:25.801 "io_size": 4096, 00:39:25.801 "runtime": 10.003187, 00:39:25.801 "iops": 23472.21940367605, 00:39:25.801 "mibps": 91.68835704560956, 00:39:25.801 "io_failed": 0, 00:39:25.801 "io_timeout": 0, 00:39:25.801 "avg_latency_us": 5449.51053864767, 00:39:25.801 "min_latency_us": 3183.177142857143, 00:39:25.801 "max_latency_us": 26214.4 00:39:25.801 } 00:39:25.801 ], 00:39:25.801 "core_count": 1 00:39:25.801 } 00:39:25.801 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 406920 00:39:25.801 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 406920 ']' 00:39:25.801 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 406920 00:39:25.801 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:39:25.801 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:25.801 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 406920 00:39:25.801 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:25.801 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:25.801 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 406920' 00:39:25.801 killing process with pid 406920 00:39:25.801 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 406920 00:39:25.801 Received shutdown signal, test time was about 10.000000 seconds 00:39:25.801 00:39:25.801 Latency(us) 00:39:25.801 [2024-12-14T02:20:40.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:25.801 [2024-12-14T02:20:40.934Z] =================================================================================================================== 00:39:25.801 [2024-12-14T02:20:40.934Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:25.801 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 406920 00:39:25.801 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:25.801 03:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
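The per-run summary above is emitted as plain JSON, so the same jq pattern the test applies to bdev_lvol_get_lvstores output works on it as well. A small sketch, assuming the JSON block has been captured into results.json (an illustrative file name; the trace only prints it to stdout):

    # Headline numbers from the 10 s randwrite run against the grown lvol
    jq -r '.results[0].iops' results.json            # ~23472 IOPS
    jq -r '.results[0].avg_latency_us' results.json  # ~5449.5 us average latency
    jq -r '.core_count' results.json                 # single reactor core (bdevperf ran with -m 0x2)

The two RPCs at the end of the trace above then begin teardown by removing the discovery listener and deleting nqn.2016-06.io.spdk:cnode0.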
00:39:26.059 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:26.059 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ba16420-3e91-4507-baec-fbafb385b7b7 00:39:26.318 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:26.318 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:39:26.318 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 406596 00:39:26.318 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 406596 00:39:26.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 406596 Killed "${NVMF_APP[@]}" "$@" 00:39:26.318 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:39:26.318 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:39:26.318 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:26.318 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:26.318 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:26.318 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=407071 00:39:26.318 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 407071 00:39:26.318 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:26.318 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 407071 ']' 00:39:26.319 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:26.319 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:26.319 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:26.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:26.319 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:26.319 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:26.319 [2024-12-14 03:20:41.416471] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:26.319 [2024-12-14 03:20:41.417346] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:26.319 [2024-12-14 03:20:41.417379] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:26.576 [2024-12-14 03:20:41.495328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.576 [2024-12-14 03:20:41.516219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:26.576 [2024-12-14 03:20:41.516253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:26.576 [2024-12-14 03:20:41.516260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:26.576 [2024-12-14 03:20:41.516266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:26.576 [2024-12-14 03:20:41.516271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:26.576 [2024-12-14 03:20:41.516738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.576 [2024-12-14 03:20:41.578869] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:26.576 [2024-12-14 03:20:41.579062] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
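With the target restarted in interrupt mode, the next RPCs in the trace re-create the aio bdev on top of the same backing file that was left behind when the previous app was killed with -9, which is what triggers the blobstore recovery notices that follow. A minimal sketch of that dirty-restart step, using the same rpc.py shorthand as above and the backing file path and lvol UUID from this run:

    # Re-attach the untouched backing file; examine replays the dirty lvstore metadata
    rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_wait_for_examine
    # The lvol should reappear once blobstore recovery completes
    rpc.py bdev_get_bdevs -b 92cbf20c-bc28-4e18-82e1-a24b46b60fef -t 2000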
00:39:26.577 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:26.577 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:26.577 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:26.577 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:26.577 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:26.577 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:26.577 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:26.834 [2024-12-14 03:20:41.810109] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:39:26.834 [2024-12-14 03:20:41.810303] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:39:26.834 [2024-12-14 03:20:41.810402] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:39:26.834 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:39:26.834 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 92cbf20c-bc28-4e18-82e1-a24b46b60fef 00:39:26.834 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=92cbf20c-bc28-4e18-82e1-a24b46b60fef 00:39:26.834 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:26.834 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:26.835 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:26.835 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:26.835 03:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:27.093 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 92cbf20c-bc28-4e18-82e1-a24b46b60fef -t 2000 00:39:27.093 [ 00:39:27.093 { 00:39:27.093 "name": "92cbf20c-bc28-4e18-82e1-a24b46b60fef", 00:39:27.093 "aliases": [ 00:39:27.093 "lvs/lvol" 00:39:27.093 ], 00:39:27.093 "product_name": "Logical Volume", 00:39:27.093 "block_size": 4096, 00:39:27.093 "num_blocks": 38912, 00:39:27.093 "uuid": "92cbf20c-bc28-4e18-82e1-a24b46b60fef", 00:39:27.093 "assigned_rate_limits": { 00:39:27.093 "rw_ios_per_sec": 0, 00:39:27.093 "rw_mbytes_per_sec": 0, 00:39:27.093 
"r_mbytes_per_sec": 0, 00:39:27.093 "w_mbytes_per_sec": 0 00:39:27.093 }, 00:39:27.093 "claimed": false, 00:39:27.093 "zoned": false, 00:39:27.093 "supported_io_types": { 00:39:27.094 "read": true, 00:39:27.094 "write": true, 00:39:27.094 "unmap": true, 00:39:27.094 "flush": false, 00:39:27.094 "reset": true, 00:39:27.094 "nvme_admin": false, 00:39:27.094 "nvme_io": false, 00:39:27.094 "nvme_io_md": false, 00:39:27.094 "write_zeroes": true, 00:39:27.094 "zcopy": false, 00:39:27.094 "get_zone_info": false, 00:39:27.094 "zone_management": false, 00:39:27.094 "zone_append": false, 00:39:27.094 "compare": false, 00:39:27.094 "compare_and_write": false, 00:39:27.094 "abort": false, 00:39:27.094 "seek_hole": true, 00:39:27.094 "seek_data": true, 00:39:27.094 "copy": false, 00:39:27.094 "nvme_iov_md": false 00:39:27.094 }, 00:39:27.094 "driver_specific": { 00:39:27.094 "lvol": { 00:39:27.094 "lvol_store_uuid": "4ba16420-3e91-4507-baec-fbafb385b7b7", 00:39:27.094 "base_bdev": "aio_bdev", 00:39:27.094 "thin_provision": false, 00:39:27.094 "num_allocated_clusters": 38, 00:39:27.094 "snapshot": false, 00:39:27.094 "clone": false, 00:39:27.094 "esnap_clone": false 00:39:27.094 } 00:39:27.094 } 00:39:27.094 } 00:39:27.094 ] 00:39:27.094 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:27.094 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:39:27.094 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ba16420-3e91-4507-baec-fbafb385b7b7 00:39:27.352 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:39:27.352 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ba16420-3e91-4507-baec-fbafb385b7b7 00:39:27.352 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:39:27.611 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:39:27.611 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:27.870 [2024-12-14 03:20:42.773184] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:27.870 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ba16420-3e91-4507-baec-fbafb385b7b7 00:39:27.870 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:39:27.870 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ba16420-3e91-4507-baec-fbafb385b7b7 00:39:27.870 03:20:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:27.870 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:27.870 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:27.870 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:27.870 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:27.870 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:27.870 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:27.870 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:27.870 03:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ba16420-3e91-4507-baec-fbafb385b7b7 00:39:27.870 request: 00:39:27.870 { 00:39:27.870 "uuid": "4ba16420-3e91-4507-baec-fbafb385b7b7", 00:39:27.870 "method": "bdev_lvol_get_lvstores", 00:39:27.870 "req_id": 1 00:39:27.870 } 00:39:27.870 Got JSON-RPC error response 00:39:27.870 response: 00:39:27.870 { 00:39:27.870 "code": -19, 00:39:27.870 "message": "No such device" 00:39:27.870 } 00:39:28.129 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:39:28.129 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:28.129 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:28.129 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:28.129 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:28.129 aio_bdev 00:39:28.129 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 92cbf20c-bc28-4e18-82e1-a24b46b60fef 00:39:28.129 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=92cbf20c-bc28-4e18-82e1-a24b46b60fef 00:39:28.129 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:28.129 03:20:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:28.129 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:28.129 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:28.129 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:28.388 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 92cbf20c-bc28-4e18-82e1-a24b46b60fef -t 2000 00:39:28.646 [ 00:39:28.646 { 00:39:28.646 "name": "92cbf20c-bc28-4e18-82e1-a24b46b60fef", 00:39:28.646 "aliases": [ 00:39:28.646 "lvs/lvol" 00:39:28.646 ], 00:39:28.646 "product_name": "Logical Volume", 00:39:28.646 "block_size": 4096, 00:39:28.646 "num_blocks": 38912, 00:39:28.646 "uuid": "92cbf20c-bc28-4e18-82e1-a24b46b60fef", 00:39:28.646 "assigned_rate_limits": { 00:39:28.646 "rw_ios_per_sec": 0, 00:39:28.646 "rw_mbytes_per_sec": 0, 00:39:28.646 "r_mbytes_per_sec": 0, 00:39:28.646 "w_mbytes_per_sec": 0 00:39:28.646 }, 00:39:28.646 "claimed": false, 00:39:28.646 "zoned": false, 00:39:28.646 "supported_io_types": { 00:39:28.646 "read": true, 00:39:28.646 "write": true, 00:39:28.646 "unmap": true, 00:39:28.646 "flush": false, 00:39:28.646 "reset": true, 00:39:28.646 "nvme_admin": false, 00:39:28.647 "nvme_io": false, 00:39:28.647 "nvme_io_md": false, 00:39:28.647 "write_zeroes": true, 00:39:28.647 "zcopy": false, 00:39:28.647 "get_zone_info": false, 00:39:28.647 "zone_management": false, 00:39:28.647 "zone_append": false, 00:39:28.647 "compare": false, 00:39:28.647 "compare_and_write": false, 00:39:28.647 "abort": false, 00:39:28.647 "seek_hole": true, 00:39:28.647 "seek_data": true, 00:39:28.647 "copy": false, 00:39:28.647 "nvme_iov_md": false 00:39:28.647 }, 00:39:28.647 "driver_specific": { 00:39:28.647 "lvol": { 00:39:28.647 "lvol_store_uuid": "4ba16420-3e91-4507-baec-fbafb385b7b7", 00:39:28.647 "base_bdev": "aio_bdev", 00:39:28.647 "thin_provision": false, 00:39:28.647 "num_allocated_clusters": 38, 00:39:28.647 "snapshot": false, 00:39:28.647 "clone": false, 00:39:28.647 "esnap_clone": false 00:39:28.647 } 00:39:28.647 } 00:39:28.647 } 00:39:28.647 ] 00:39:28.647 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:28.647 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ba16420-3e91-4507-baec-fbafb385b7b7 00:39:28.647 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:28.647 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:28.647 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ba16420-3e91-4507-baec-fbafb385b7b7 00:39:28.647 03:20:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:28.905 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:28.905 03:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 92cbf20c-bc28-4e18-82e1-a24b46b60fef 00:39:29.164 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4ba16420-3e91-4507-baec-fbafb385b7b7 00:39:29.422 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:29.682 00:39:29.682 real 0m16.778s 00:39:29.682 user 0m34.310s 00:39:29.682 sys 0m3.728s 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:29.682 ************************************ 00:39:29.682 END TEST lvs_grow_dirty 00:39:29.682 ************************************ 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:39:29.682 nvmf_trace.0 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:29.682 rmmod nvme_tcp 00:39:29.682 rmmod nvme_fabrics 00:39:29.682 rmmod nvme_keyring 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 407071 ']' 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 407071 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 407071 ']' 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 407071 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 407071 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 407071' 00:39:29.682 killing process with pid 407071 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 407071 00:39:29.682 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 407071 00:39:29.941 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:29.941 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:29.941 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:29.941 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:39:29.941 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:39:29.941 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:29.941 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:39:29.941 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:29.941 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:29.941 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:29.941 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:29.941 03:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:31.974 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:31.974 00:39:31.974 real 0m41.360s 00:39:31.974 user 0m51.893s 00:39:31.974 sys 0m10.015s 00:39:31.974 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:31.974 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:31.974 ************************************ 00:39:31.974 END TEST nvmf_lvs_grow 00:39:31.974 ************************************ 00:39:31.974 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:31.974 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:31.974 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:31.974 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:32.243 ************************************ 00:39:32.243 START TEST nvmf_bdev_io_wait 00:39:32.243 ************************************ 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:32.243 * Looking for test storage... 
00:39:32.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:32.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.243 --rc genhtml_branch_coverage=1 00:39:32.243 --rc genhtml_function_coverage=1 00:39:32.243 --rc genhtml_legend=1 00:39:32.243 --rc geninfo_all_blocks=1 00:39:32.243 --rc geninfo_unexecuted_blocks=1 00:39:32.243 00:39:32.243 ' 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:32.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.243 --rc genhtml_branch_coverage=1 00:39:32.243 --rc genhtml_function_coverage=1 00:39:32.243 --rc genhtml_legend=1 00:39:32.243 --rc geninfo_all_blocks=1 00:39:32.243 --rc geninfo_unexecuted_blocks=1 00:39:32.243 00:39:32.243 ' 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:32.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.243 --rc genhtml_branch_coverage=1 00:39:32.243 --rc genhtml_function_coverage=1 00:39:32.243 --rc genhtml_legend=1 00:39:32.243 --rc geninfo_all_blocks=1 00:39:32.243 --rc geninfo_unexecuted_blocks=1 00:39:32.243 00:39:32.243 ' 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:32.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.243 --rc genhtml_branch_coverage=1 00:39:32.243 --rc genhtml_function_coverage=1 00:39:32.243 --rc genhtml_legend=1 00:39:32.243 --rc geninfo_all_blocks=1 00:39:32.243 --rc 
geninfo_unexecuted_blocks=1 00:39:32.243 00:39:32.243 ' 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.243 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:39:32.244 03:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:38.815 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:38.816 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:38.816 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:38.816 Found net devices under 0000:af:00.0: cvl_0_0 00:39:38.816 
03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:38.816 Found net devices under 0000:af:00.1: cvl_0_1 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:38.816 03:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:38.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:38.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:39:38.816 00:39:38.816 --- 10.0.0.2 ping statistics --- 00:39:38.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:38.816 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:38.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:38.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:39:38.816 00:39:38.816 --- 10.0.0.1 ping statistics --- 00:39:38.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:38.816 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=409415 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 409415 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 409415 ']' 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:38.816 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:38.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.817 [2024-12-14 03:20:53.143630] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:38.817 [2024-12-14 03:20:53.144525] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:38.817 [2024-12-14 03:20:53.144557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:38.817 [2024-12-14 03:20:53.222619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:38.817 [2024-12-14 03:20:53.246292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:38.817 [2024-12-14 03:20:53.246338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:38.817 [2024-12-14 03:20:53.246346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:38.817 [2024-12-14 03:20:53.246352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:38.817 [2024-12-14 03:20:53.246357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:38.817 [2024-12-14 03:20:53.247617] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:38.817 [2024-12-14 03:20:53.247728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:39:38.817 [2024-12-14 03:20:53.247770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.817 [2024-12-14 03:20:53.247771] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:39:38.817 [2024-12-14 03:20:53.248165] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.817 [2024-12-14 03:20:53.388245] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:38.817 [2024-12-14 03:20:53.388791] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:38.817 [2024-12-14 03:20:53.388907] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:38.817 [2024-12-14 03:20:53.389039] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.817 [2024-12-14 03:20:53.400442] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.817 Malloc0 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:38.817 [2024-12-14 03:20:53.472788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=409440 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=409442 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:38.817 { 00:39:38.817 "params": { 00:39:38.817 "name": "Nvme$subsystem", 00:39:38.817 "trtype": "$TEST_TRANSPORT", 00:39:38.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.817 "adrfam": "ipv4", 00:39:38.817 "trsvcid": "$NVMF_PORT", 00:39:38.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.817 "hdgst": ${hdgst:-false}, 00:39:38.817 "ddgst": ${ddgst:-false} 00:39:38.817 }, 00:39:38.817 "method": "bdev_nvme_attach_controller" 00:39:38.817 } 00:39:38.817 EOF 00:39:38.817 )") 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=409444 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:38.817 { 00:39:38.817 "params": { 00:39:38.817 "name": "Nvme$subsystem", 00:39:38.817 "trtype": "$TEST_TRANSPORT", 00:39:38.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.817 "adrfam": "ipv4", 00:39:38.817 "trsvcid": "$NVMF_PORT", 00:39:38.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.817 "hdgst": ${hdgst:-false}, 00:39:38.817 "ddgst": ${ddgst:-false} 00:39:38.817 }, 00:39:38.817 "method": "bdev_nvme_attach_controller" 00:39:38.817 } 00:39:38.817 EOF 00:39:38.817 )") 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=409447 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:38.817 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:38.817 { 00:39:38.817 "params": { 00:39:38.817 "name": "Nvme$subsystem", 00:39:38.817 "trtype": "$TEST_TRANSPORT", 00:39:38.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.817 "adrfam": "ipv4", 00:39:38.817 "trsvcid": "$NVMF_PORT", 00:39:38.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.817 "hdgst": ${hdgst:-false}, 00:39:38.818 "ddgst": ${ddgst:-false} 00:39:38.818 }, 00:39:38.818 "method": "bdev_nvme_attach_controller" 00:39:38.818 } 00:39:38.818 EOF 00:39:38.818 )") 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:38.818 { 00:39:38.818 "params": { 00:39:38.818 "name": "Nvme$subsystem", 00:39:38.818 "trtype": "$TEST_TRANSPORT", 00:39:38.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.818 "adrfam": "ipv4", 00:39:38.818 "trsvcid": "$NVMF_PORT", 00:39:38.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.818 "hdgst": ${hdgst:-false}, 00:39:38.818 "ddgst": ${ddgst:-false} 00:39:38.818 }, 00:39:38.818 "method": "bdev_nvme_attach_controller" 00:39:38.818 } 00:39:38.818 EOF 00:39:38.818 )") 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 409440 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:38.818 "params": { 00:39:38.818 "name": "Nvme1", 00:39:38.818 "trtype": "tcp", 00:39:38.818 "traddr": "10.0.0.2", 00:39:38.818 "adrfam": "ipv4", 00:39:38.818 "trsvcid": "4420", 00:39:38.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:38.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:38.818 "hdgst": false, 00:39:38.818 "ddgst": false 00:39:38.818 }, 00:39:38.818 "method": "bdev_nvme_attach_controller" 00:39:38.818 }' 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:38.818 "params": { 00:39:38.818 "name": "Nvme1", 00:39:38.818 "trtype": "tcp", 00:39:38.818 "traddr": "10.0.0.2", 00:39:38.818 "adrfam": "ipv4", 00:39:38.818 "trsvcid": "4420", 00:39:38.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:38.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:38.818 "hdgst": false, 00:39:38.818 "ddgst": false 00:39:38.818 }, 00:39:38.818 "method": "bdev_nvme_attach_controller" 00:39:38.818 }' 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:38.818 "params": { 00:39:38.818 "name": "Nvme1", 00:39:38.818 "trtype": "tcp", 00:39:38.818 "traddr": "10.0.0.2", 00:39:38.818 "adrfam": "ipv4", 00:39:38.818 "trsvcid": "4420", 00:39:38.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:38.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:38.818 "hdgst": false, 00:39:38.818 "ddgst": false 00:39:38.818 }, 00:39:38.818 "method": "bdev_nvme_attach_controller" 00:39:38.818 }' 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:38.818 03:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:38.818 "params": { 00:39:38.818 "name": "Nvme1", 00:39:38.818 "trtype": "tcp", 00:39:38.818 "traddr": "10.0.0.2", 00:39:38.818 "adrfam": "ipv4", 00:39:38.818 "trsvcid": "4420", 00:39:38.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:38.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:38.818 "hdgst": false, 00:39:38.818 "ddgst": false 00:39:38.818 }, 00:39:38.818 "method": "bdev_nvme_attach_controller" 00:39:38.818 }' 00:39:38.818 [2024-12-14 03:20:53.524994] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:38.818 [2024-12-14 03:20:53.525044] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:38.818 [2024-12-14 03:20:53.525041] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:38.818 [2024-12-14 03:20:53.525080] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:39:38.818 [2024-12-14 03:20:53.526643] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:38.818 [2024-12-14 03:20:53.526659] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:38.818 [2024-12-14 03:20:53.526686] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:39:38.818 [2024-12-14 03:20:53.526697] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:39:38.818 [2024-12-14 03:20:53.718548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.818 [2024-12-14 03:20:53.736055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:39:38.818 [2024-12-14 03:20:53.811426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.818 [2024-12-14 03:20:53.833566] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:39:38.818 [2024-12-14 03:20:53.862770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.818 [2024-12-14 03:20:53.878553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:39:38.818 [2024-12-14 03:20:53.915032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.818 [2024-12-14 03:20:53.930820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:39:39.078 Running I/O for 1 seconds... 00:39:39.078 Running I/O for 1 seconds... 00:39:39.078 Running I/O for 1 seconds... 00:39:39.078 Running I/O for 1 seconds... 
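The nvmf_bdev_io_wait run above builds an NVMe/TCP target over RPC (nvmf_create_transport -t tcp -o -u 8192, a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420) and then launches four bdevperf initiators in parallel, one per workload (write, read, flush, unmap), each pinned to its own core and fed a generated bdev_nvme_attach_controller config over /dev/fd/63. A minimal manual sketch of the same setup is shown below; it assumes an SPDK checkout layout (scripts/rpc.py, build/examples/bdevperf), and the subsystems/bdev wrapper around the attach-controller parameters is an assumption — the log only prints the inner params object.

# Sketch of the target-side setup driven by rpc_cmd above, expressed with
# SPDK's scripts/rpc.py (path assumed; flags copied verbatim from the run).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# One of the four initiators (the write job, core mask 0x10). The JSON handed to
# bdevperf on /dev/fd/63 carries the bdev_nvme_attach_controller parameters printed
# above; the surrounding subsystems/bdev wrapper here is assumed for illustration.
cat > /tmp/nvme1.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
  "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420",
  "subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1",
  "hdgst":false,"ddgst":false}}]}]}
EOF
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256

The read, flush and unmap jobs in the log differ only in core mask (-m 0x20/0x40/0x80), instance id (-i) and -w; the harness waits on all four PIDs (409440, 409442, 409444, 409447) before deleting the subsystem, and their one-second IOPS/latency summaries follow.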
00:39:40.015 8426.00 IOPS, 32.91 MiB/s 00:39:40.015 Latency(us) 00:39:40.015 [2024-12-14T02:20:55.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.015 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:39:40.015 Nvme1n1 : 1.02 8441.03 32.97 0.00 0.00 15045.87 3167.57 23093.64 00:39:40.015 [2024-12-14T02:20:55.148Z] =================================================================================================================== 00:39:40.015 [2024-12-14T02:20:55.148Z] Total : 8441.03 32.97 0.00 0.00 15045.87 3167.57 23093.64 00:39:40.015 242072.00 IOPS, 945.59 MiB/s 00:39:40.015 Latency(us) 00:39:40.015 [2024-12-14T02:20:55.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.015 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:39:40.015 Nvme1n1 : 1.00 241711.87 944.19 0.00 0.00 526.34 218.45 1490.16 00:39:40.015 [2024-12-14T02:20:55.148Z] =================================================================================================================== 00:39:40.015 [2024-12-14T02:20:55.148Z] Total : 241711.87 944.19 0.00 0.00 526.34 218.45 1490.16 00:39:40.015 7772.00 IOPS, 30.36 MiB/s 00:39:40.015 Latency(us) 00:39:40.015 [2024-12-14T02:20:55.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.015 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:39:40.015 Nvme1n1 : 1.01 7853.80 30.68 0.00 0.00 16247.47 5242.88 25090.93 00:39:40.015 [2024-12-14T02:20:55.148Z] =================================================================================================================== 00:39:40.015 [2024-12-14T02:20:55.148Z] Total : 7853.80 30.68 0.00 0.00 16247.47 5242.88 25090.93 00:39:40.274 12409.00 IOPS, 48.47 MiB/s 00:39:40.274 Latency(us) 00:39:40.274 [2024-12-14T02:20:55.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.274 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:39:40.274 Nvme1n1 : 1.01 12488.90 48.78 0.00 0.00 10224.51 1529.17 14917.24 00:39:40.274 [2024-12-14T02:20:55.407Z] =================================================================================================================== 00:39:40.274 [2024-12-14T02:20:55.407Z] Total : 12488.90 48.78 0.00 0.00 10224.51 1529.17 14917.24 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 409442 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 409444 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 409447 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:40.274 rmmod nvme_tcp 00:39:40.274 rmmod nvme_fabrics 00:39:40.274 rmmod nvme_keyring 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 409415 ']' 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 409415 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 409415 ']' 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 409415 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:40.274 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 409415 00:39:40.533 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:40.533 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:40.533 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 409415' 00:39:40.533 killing process with pid 409415 00:39:40.533 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 409415 00:39:40.533 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 409415 00:39:40.533 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:40.533 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:40.533 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:40.533 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:39:40.533 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:39:40.533 
03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:39:40.534 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:40.534 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:40.534 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:40.534 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:40.534 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:40.534 03:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:43.069 00:39:43.069 real 0m10.528s 00:39:43.069 user 0m14.542s 00:39:43.069 sys 0m6.327s 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:43.069 ************************************ 00:39:43.069 END TEST nvmf_bdev_io_wait 00:39:43.069 ************************************ 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:43.069 ************************************ 00:39:43.069 START TEST nvmf_queue_depth 00:39:43.069 ************************************ 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:43.069 * Looking for test storage... 
00:39:43.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:43.069 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:43.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.070 --rc genhtml_branch_coverage=1 00:39:43.070 --rc genhtml_function_coverage=1 00:39:43.070 --rc genhtml_legend=1 00:39:43.070 --rc geninfo_all_blocks=1 00:39:43.070 --rc geninfo_unexecuted_blocks=1 00:39:43.070 00:39:43.070 ' 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:43.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.070 --rc genhtml_branch_coverage=1 00:39:43.070 --rc genhtml_function_coverage=1 00:39:43.070 --rc genhtml_legend=1 00:39:43.070 --rc geninfo_all_blocks=1 00:39:43.070 --rc geninfo_unexecuted_blocks=1 00:39:43.070 00:39:43.070 ' 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:43.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.070 --rc genhtml_branch_coverage=1 00:39:43.070 --rc genhtml_function_coverage=1 00:39:43.070 --rc genhtml_legend=1 00:39:43.070 --rc geninfo_all_blocks=1 00:39:43.070 --rc geninfo_unexecuted_blocks=1 00:39:43.070 00:39:43.070 ' 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:43.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.070 --rc genhtml_branch_coverage=1 00:39:43.070 --rc genhtml_function_coverage=1 00:39:43.070 --rc genhtml_legend=1 00:39:43.070 --rc geninfo_all_blocks=1 00:39:43.070 --rc 
geninfo_unexecuted_blocks=1 00:39:43.070 00:39:43.070 ' 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:43.070 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:43.071 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:43.071 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:43.071 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:43.071 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:43.071 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:43.071 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:43.071 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:43.071 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:43.071 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:43.071 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:39:43.071 03:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:48.343 03:21:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:48.343 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:48.343 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:48.344 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:39:48.344 Found net devices under 0000:af:00.0: cvl_0_0 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:48.344 Found net devices under 0000:af:00.1: cvl_0_1 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:48.344 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:48.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:48.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:39:48.603 00:39:48.603 --- 10.0.0.2 ping statistics --- 00:39:48.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:48.603 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:48.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:48.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:39:48.603 00:39:48.603 --- 10.0.0.1 ping statistics --- 00:39:48.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:48.603 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:48.603 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:48.862 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:48.862 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:48.862 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=411841 00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 411841 00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 411841 ']' 00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:48.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.863 [2024-12-14 03:21:03.804007] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:48.863 [2024-12-14 03:21:03.804946] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:48.863 [2024-12-14 03:21:03.804980] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:48.863 [2024-12-14 03:21:03.885996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.863 [2024-12-14 03:21:03.907184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:48.863 [2024-12-14 03:21:03.907218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:48.863 [2024-12-14 03:21:03.907226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:48.863 [2024-12-14 03:21:03.907231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:48.863 [2024-12-14 03:21:03.907236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:48.863 [2024-12-14 03:21:03.907702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:48.863 [2024-12-14 03:21:03.969829] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:48.863 [2024-12-14 03:21:03.970019] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
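Before the queue-depth run itself, the harness turns the two E810 ports into a local NVMe/TCP loopback: cvl_0_0 is moved into the network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the default namespace as 10.0.0.1 (initiator side), reachability is checked with one ping in each direction, and nvmf_tgt is started inside the namespace on a single core (-m 0x2) with --interrupt-mode, which is what produces the "Set spdk_thread (...) to intr mode" notices above. A condensed sketch of that plumbing, using the interface names from this rig (substitute your own NICs; the nvmf_tgt path assumes an SPDK checkout):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP (port 4420) in
ping -c 1 10.0.0.2                                              # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

With the target up, the test attaches NVMe0 through the bdevperf RPC socket (/var/tmp/bdevperf.sock) and drives a 10-second verify workload at queue depth 1024; the per-second IOPS samples and the final latency summary follow below.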
00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:48.863 03:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:49.122 [2024-12-14 03:21:04.032393] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:49.122 Malloc0 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:49.122 [2024-12-14 03:21:04.112525] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=411863 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 411863 /var/tmp/bdevperf.sock 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 411863 ']' 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:49.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:49.122 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:49.122 [2024-12-14 03:21:04.163619] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
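Up to this point the queue-depth test has configured the target purely over RPC (a TCP transport, a 64 MB malloc bdev, subsystem cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.2:4420) and has launched bdevperf with a queue depth of 1024. A condensed sketch of the same sequence issued directly with scripts/rpc.py instead of the test's rpc_cmd wrapper; the arguments are the ones visible in the trace, and SPDK_DIR/RPC are carried over from the earlier sketch.

    RPC="$SPDK_DIR/scripts/rpc.py"

    # Target side (target/queue_depth.sh@23-27 in the trace):
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevperf in RPC-server mode (-z), queue depth 1024,
    # 4 KiB I/O, 10 s verify workload, matching the -q/-o/-w/-t options above.
    "$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &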
00:39:49.122 [2024-12-14 03:21:04.163665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411863 ] 00:39:49.122 [2024-12-14 03:21:04.240567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:49.381 [2024-12-14 03:21:04.263799] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:49.381 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:49.381 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:49.381 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:49.381 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.381 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:49.381 NVMe0n1 00:39:49.381 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.381 03:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:49.381 Running I/O for 10 seconds... 00:39:51.694 11850.00 IOPS, 46.29 MiB/s [2024-12-14T02:21:07.763Z] 12275.50 IOPS, 47.95 MiB/s [2024-12-14T02:21:08.699Z] 12306.33 IOPS, 48.07 MiB/s [2024-12-14T02:21:09.635Z] 12423.00 IOPS, 48.53 MiB/s [2024-12-14T02:21:10.571Z] 12439.80 IOPS, 48.59 MiB/s [2024-12-14T02:21:11.946Z] 12400.67 IOPS, 48.44 MiB/s [2024-12-14T02:21:12.883Z] 12415.43 IOPS, 48.50 MiB/s [2024-12-14T02:21:13.819Z] 12416.88 IOPS, 48.50 MiB/s [2024-12-14T02:21:14.756Z] 12483.78 IOPS, 48.76 MiB/s [2024-12-14T02:21:14.756Z] 12495.50 IOPS, 48.81 MiB/s 00:39:59.623 Latency(us) 00:39:59.623 [2024-12-14T02:21:14.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:59.623 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:59.623 Verification LBA range: start 0x0 length 0x4000 00:39:59.623 NVMe0n1 : 10.06 12520.83 48.91 0.00 0.00 81533.80 18849.40 53926.77 00:39:59.623 [2024-12-14T02:21:14.756Z] =================================================================================================================== 00:39:59.623 [2024-12-14T02:21:14.756Z] Total : 12520.83 48.91 0.00 0.00 81533.80 18849.40 53926.77 00:39:59.623 { 00:39:59.623 "results": [ 00:39:59.623 { 00:39:59.623 "job": "NVMe0n1", 00:39:59.623 "core_mask": "0x1", 00:39:59.623 "workload": "verify", 00:39:59.623 "status": "finished", 00:39:59.623 "verify_range": { 00:39:59.623 "start": 0, 00:39:59.623 "length": 16384 00:39:59.623 }, 00:39:59.623 "queue_depth": 1024, 00:39:59.623 "io_size": 4096, 00:39:59.623 "runtime": 10.059957, 00:39:59.623 "iops": 12520.82886636593, 00:39:59.623 "mibps": 48.90948775924191, 00:39:59.623 "io_failed": 0, 00:39:59.623 "io_timeout": 0, 00:39:59.623 "avg_latency_us": 81533.80463033511, 00:39:59.623 "min_latency_us": 18849.401904761904, 00:39:59.623 "max_latency_us": 53926.76571428571 00:39:59.623 } 
00:39:59.623 ], 00:39:59.623 "core_count": 1 00:39:59.624 } 00:39:59.624 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 411863 00:39:59.624 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 411863 ']' 00:39:59.624 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 411863 00:39:59.624 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:59.624 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:59.624 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 411863 00:39:59.624 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:59.624 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:59.624 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 411863' 00:39:59.624 killing process with pid 411863 00:39:59.624 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 411863 00:39:59.624 Received shutdown signal, test time was about 10.000000 seconds 00:39:59.624 00:39:59.624 Latency(us) 00:39:59.624 [2024-12-14T02:21:14.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:59.624 [2024-12-14T02:21:14.757Z] =================================================================================================================== 00:39:59.624 [2024-12-14T02:21:14.757Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:59.624 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 411863 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:59.883 rmmod nvme_tcp 00:39:59.883 rmmod nvme_fabrics 00:39:59.883 rmmod nvme_keyring 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:39:59.883 
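The I/O phase above attaches an NVMe-oF controller inside the already-running bdevperf process, drives the workload through the bdevperf.py helper, and prints the per-job results as JSON before killing both processes. A sketch of those initiator-side steps, plus a hypothetical jq one-liner for pulling the headline numbers out of the JSON block; the test itself only echoes the JSON, so capturing it to results.json is an assumption.

    # Attach the target's subsystem as bdev NVMe0n1 inside bdevperf, then run
    # the configured workload (both commands appear verbatim in the trace).
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

    # Hypothetical post-processing, assuming the JSON result block was saved:
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.avg_latency_us) us avg latency"' results.json
    #   NVMe0n1: 12520.82886636593 IOPS, 81533.80463033511 us avg latency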
03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 411841 ']' 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 411841 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 411841 ']' 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 411841 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 411841 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 411841' 00:39:59.883 killing process with pid 411841 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 411841 00:39:59.883 03:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 411841 00:40:00.145 03:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:00.145 03:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:00.145 03:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:00.145 03:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:00.145 03:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:40:00.145 03:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:00.145 03:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:40:00.145 03:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:00.145 03:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:00.145 03:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:00.145 03:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:00.145 03:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.049 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:02.308 00:40:02.308 real 0m19.481s 00:40:02.308 user 0m22.497s 00:40:02.308 sys 0m6.177s 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:02.308 ************************************ 00:40:02.308 END TEST nvmf_queue_depth 00:40:02.308 ************************************ 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:02.308 ************************************ 00:40:02.308 START TEST nvmf_target_multipath 00:40:02.308 ************************************ 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:02.308 * Looking for test storage... 00:40:02.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:02.308 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:40:02.309 03:21:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:02.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.309 --rc genhtml_branch_coverage=1 00:40:02.309 --rc genhtml_function_coverage=1 00:40:02.309 --rc genhtml_legend=1 00:40:02.309 --rc geninfo_all_blocks=1 00:40:02.309 --rc geninfo_unexecuted_blocks=1 00:40:02.309 00:40:02.309 ' 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:02.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.309 --rc genhtml_branch_coverage=1 00:40:02.309 --rc genhtml_function_coverage=1 00:40:02.309 --rc genhtml_legend=1 00:40:02.309 --rc geninfo_all_blocks=1 00:40:02.309 --rc geninfo_unexecuted_blocks=1 00:40:02.309 00:40:02.309 ' 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:02.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.309 --rc genhtml_branch_coverage=1 00:40:02.309 --rc genhtml_function_coverage=1 00:40:02.309 --rc genhtml_legend=1 00:40:02.309 --rc geninfo_all_blocks=1 00:40:02.309 --rc 
geninfo_unexecuted_blocks=1 00:40:02.309 00:40:02.309 ' 00:40:02.309 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:02.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.309 --rc genhtml_branch_coverage=1 00:40:02.309 --rc genhtml_function_coverage=1 00:40:02.309 --rc genhtml_legend=1 00:40:02.309 --rc geninfo_all_blocks=1 00:40:02.309 --rc geninfo_unexecuted_blocks=1 00:40:02.309 00:40:02.309 ' 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
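The block of scripts/common.sh trace above is the lcov version check that decides which coverage options to export: lt 1.15 2 splits both version strings on the '.-:' separators and compares them component by component, concluding that the installed lcov predates 2.x. A condensed sketch of that comparison, simplified from the traced cmp_versions helper (the real one handles more operators and edge cases):

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a == b )) && continue
            case $op in
                '<') return $(( a < b ? 0 : 1 )) ;;
                '>') return $(( a > b ? 0 : 1 )) ;;
            esac
        done
        [[ $op == '<=' || $op == '>=' || $op == '==' ]]
    }

    cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x"   # matches the trace: 1 < 2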
00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:02.568 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:02.569 03:21:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:40:02.569 03:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
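Before any network setup, build_nvmf_app_args (traced above) assembles the command line the target will eventually be started with: the shared-memory id, the 0xFFFF tracepoint mask, and, because this suite was invoked with --interrupt-mode, the --interrupt-mode flag. A rough reconstruction follows; the initial NVMF_APP value and the TEST_INTERRUPT_MODE guard name are assumptions, since only the appends appear in the trace.

    # Rough reconstruction of the appends traced at nvmf/common.sh@29-34.
    NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")                 # assumed starting value
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)         # shm id + tracepoint mask
    NVMF_APP+=("${NO_HUGE[@]}")                         # empty for the hugepage build
    if [[ ${TEST_INTERRUPT_MODE:-0} -eq 1 ]]; then      # guard name is an assumption
        NVMF_APP+=(--interrupt-mode)
    fi
    # Later (nvmf/common.sh@293 in the trace) the netns prefix is prepended:
    # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")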
00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:09.136 03:21:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:09.136 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:09.136 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:09.136 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:09.137 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:09.137 03:21:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:09.137 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:09.137 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:09.137 03:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:09.137 Found net devices under 0000:af:00.0: cvl_0_0 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:09.137 Found net devices under 0000:af:00.1: cvl_0_1 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:09.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:09.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:40:09.137 00:40:09.137 --- 10.0.0.2 ping statistics --- 00:40:09.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:09.137 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:09.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:09.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:40:09.137 00:40:09.137 --- 10.0.0.1 ping statistics --- 00:40:09.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:09.137 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:40:09.137 only one NIC for nvmf test 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:09.137 rmmod nvme_tcp 00:40:09.137 rmmod nvme_fabrics 00:40:09.137 rmmod nvme_keyring 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:09.137 03:21:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:09.137 03:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:10.518 03:21:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:10.518 00:40:10.518 real 0m8.319s 00:40:10.518 user 0m1.825s 00:40:10.518 sys 0m4.379s 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:10.518 ************************************ 00:40:10.518 END TEST nvmf_target_multipath 00:40:10.518 ************************************ 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:10.518 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:10.777 ************************************ 00:40:10.777 START TEST nvmf_zcopy 00:40:10.777 ************************************ 00:40:10.777 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:10.777 * Looking for test storage... 
00:40:10.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:10.777 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:10.777 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:40:10.777 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:10.777 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:10.777 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:10.777 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:10.777 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:10.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.778 --rc genhtml_branch_coverage=1 00:40:10.778 --rc genhtml_function_coverage=1 00:40:10.778 --rc genhtml_legend=1 00:40:10.778 --rc geninfo_all_blocks=1 00:40:10.778 --rc geninfo_unexecuted_blocks=1 00:40:10.778 00:40:10.778 ' 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:10.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.778 --rc genhtml_branch_coverage=1 00:40:10.778 --rc genhtml_function_coverage=1 00:40:10.778 --rc genhtml_legend=1 00:40:10.778 --rc geninfo_all_blocks=1 00:40:10.778 --rc geninfo_unexecuted_blocks=1 00:40:10.778 00:40:10.778 ' 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:10.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.778 --rc genhtml_branch_coverage=1 00:40:10.778 --rc genhtml_function_coverage=1 00:40:10.778 --rc genhtml_legend=1 00:40:10.778 --rc geninfo_all_blocks=1 00:40:10.778 --rc geninfo_unexecuted_blocks=1 00:40:10.778 00:40:10.778 ' 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:10.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.778 --rc genhtml_branch_coverage=1 00:40:10.778 --rc genhtml_function_coverage=1 00:40:10.778 --rc genhtml_legend=1 00:40:10.778 --rc geninfo_all_blocks=1 00:40:10.778 --rc geninfo_unexecuted_blocks=1 00:40:10.778 00:40:10.778 ' 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:10.778 03:21:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:10.778 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:10.779 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:10.779 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:10.779 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:10.779 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:40:10.779 03:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:40:17.348 03:21:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:17.348 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:17.348 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:17.348 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:17.349 Found net devices under 0000:af:00.0: cvl_0_0 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:17.349 Found net devices under 0000:af:00.1: cvl_0_1 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:17.349 03:21:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:17.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:17.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:40:17.349 00:40:17.349 --- 10.0.0.2 ping statistics --- 00:40:17.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:17.349 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:17.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:17.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:40:17.349 00:40:17.349 --- 10.0.0.1 ping statistics --- 00:40:17.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:17.349 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=416853 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 416853 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 416853 ']' 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:17.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:17.349 [2024-12-14 03:21:31.719404] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:17.349 [2024-12-14 03:21:31.720359] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:40:17.349 [2024-12-14 03:21:31.720400] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:17.349 [2024-12-14 03:21:31.800100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.349 [2024-12-14 03:21:31.821148] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:17.349 [2024-12-14 03:21:31.821181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:17.349 [2024-12-14 03:21:31.821188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:17.349 [2024-12-14 03:21:31.821194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:17.349 [2024-12-14 03:21:31.821199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:17.349 [2024-12-14 03:21:31.821667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:17.349 [2024-12-14 03:21:31.884040] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:17.349 [2024-12-14 03:21:31.884264] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
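The setup traced at nvmf/common.sh@265-291 and @508 above plumbs one e810 port into a private namespace as the target side, keeps the other port as the initiator, opens the NVMe/TCP port, and then launches nvmf_tgt in interrupt mode inside that namespace. Reassembled as a standalone sketch, with repo-relative paths; the commands are copied from the trace, and the polling loop standing in for waitforlisten is an assumption:
# move the target port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic in, tagged so teardown can strip exactly this rule
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# sanity-check both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start nvmf_tgt on core 1 (-m 0x2), interrupt mode, inside the target namespace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
# assumed stand-in for waitforlisten: poll the default RPC socket until the app answers
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done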
00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:17.349 [2024-12-14 03:21:31.950414] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.349 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:17.350 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.350 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:17.350 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.350 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:17.350 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.350 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:17.350 [2024-12-14 03:21:31.978543] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:17.350 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.350 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:17.350 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.350 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:17.350 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.350 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:40:17.350 03:21:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.350 03:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:17.350 malloc0 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:17.350 { 00:40:17.350 "params": { 00:40:17.350 "name": "Nvme$subsystem", 00:40:17.350 "trtype": "$TEST_TRANSPORT", 00:40:17.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:17.350 "adrfam": "ipv4", 00:40:17.350 "trsvcid": "$NVMF_PORT", 00:40:17.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:17.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:17.350 "hdgst": ${hdgst:-false}, 00:40:17.350 "ddgst": ${ddgst:-false} 00:40:17.350 }, 00:40:17.350 "method": "bdev_nvme_attach_controller" 00:40:17.350 } 00:40:17.350 EOF 00:40:17.350 )") 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:17.350 03:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:17.350 "params": { 00:40:17.350 "name": "Nvme1", 00:40:17.350 "trtype": "tcp", 00:40:17.350 "traddr": "10.0.0.2", 00:40:17.350 "adrfam": "ipv4", 00:40:17.350 "trsvcid": "4420", 00:40:17.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:17.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:17.350 "hdgst": false, 00:40:17.350 "ddgst": false 00:40:17.350 }, 00:40:17.350 "method": "bdev_nvme_attach_controller" 00:40:17.350 }' 00:40:17.350 [2024-12-14 03:21:32.073385] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
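With the target up, the RPC sequence traced at zcopy.sh@22-30 builds the zero-copy TCP path end to end: transport, subsystem, listeners, and a malloc-backed namespace. The same calls expressed directly against scripts/rpc.py (rpc_cmd in the harness is assumed to wrap this; all flags are copied verbatim from the trace):
# TCP transport with zero-copy enabled; -o and -c 0 passed exactly as in the trace
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
# subsystem cnode1: allow any host (-a), fixed serial number, at most 10 namespaces
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# data listener plus discovery listener on the in-namespace target address
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MB malloc bdev with 4096-byte blocks, exported as NSID 1
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
bdevperf then attaches from the initiator side over 10.0.0.2:4420 using the bdev_nvme_attach_controller parameters printed in the trace, and runs the 10-second verify workload (-t 10 -q 128 -w verify -o 8192) whose IOPS samples follow.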
00:40:17.350 [2024-12-14 03:21:32.073436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416879 ] 00:40:17.350 [2024-12-14 03:21:32.145846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.350 [2024-12-14 03:21:32.168288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:17.350 Running I/O for 10 seconds... 00:40:19.661 8562.00 IOPS, 66.89 MiB/s [2024-12-14T02:21:35.730Z] 8620.50 IOPS, 67.35 MiB/s [2024-12-14T02:21:36.666Z] 8638.00 IOPS, 67.48 MiB/s [2024-12-14T02:21:37.603Z] 8649.75 IOPS, 67.58 MiB/s [2024-12-14T02:21:38.540Z] 8657.20 IOPS, 67.63 MiB/s [2024-12-14T02:21:39.918Z] 8663.00 IOPS, 67.68 MiB/s [2024-12-14T02:21:40.854Z] 8667.71 IOPS, 67.72 MiB/s [2024-12-14T02:21:41.791Z] 8657.38 IOPS, 67.64 MiB/s [2024-12-14T02:21:42.728Z] 8658.89 IOPS, 67.65 MiB/s [2024-12-14T02:21:42.728Z] 8658.40 IOPS, 67.64 MiB/s 00:40:27.595 Latency(us) 00:40:27.595 [2024-12-14T02:21:42.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:27.595 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:40:27.595 Verification LBA range: start 0x0 length 0x1000 00:40:27.595 Nvme1n1 : 10.05 8626.47 67.39 0.00 0.00 14741.47 2012.89 45188.63 00:40:27.595 [2024-12-14T02:21:42.728Z] =================================================================================================================== 00:40:27.595 [2024-12-14T02:21:42.728Z] Total : 8626.47 67.39 0.00 0.00 14741.47 2012.89 45188.63 00:40:27.595 03:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=417006 00:40:27.595 03:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:40:27.595 03:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:27.595 03:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:40:27.595 03:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:40:27.595 03:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:27.595 03:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:27.595 03:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:27.595 03:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:27.595 { 00:40:27.595 "params": { 00:40:27.595 "name": "Nvme$subsystem", 00:40:27.595 "trtype": "$TEST_TRANSPORT", 00:40:27.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:27.595 "adrfam": "ipv4", 00:40:27.595 "trsvcid": "$NVMF_PORT", 00:40:27.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:27.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:27.595 "hdgst": ${hdgst:-false}, 00:40:27.595 "ddgst": ${ddgst:-false} 00:40:27.595 }, 00:40:27.595 "method": "bdev_nvme_attach_controller" 00:40:27.595 } 00:40:27.595 EOF 00:40:27.595 )") 00:40:27.595 03:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:27.595 
[2024-12-14 03:21:42.709992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.595 [2024-12-14 03:21:42.710024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.595 03:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:40:27.595 03:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:27.595 03:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:27.595 "params": { 00:40:27.595 "name": "Nvme1", 00:40:27.595 "trtype": "tcp", 00:40:27.595 "traddr": "10.0.0.2", 00:40:27.595 "adrfam": "ipv4", 00:40:27.595 "trsvcid": "4420", 00:40:27.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:27.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:27.595 "hdgst": false, 00:40:27.595 "ddgst": false 00:40:27.595 }, 00:40:27.595 "method": "bdev_nvme_attach_controller" 00:40:27.595 }' 00:40:27.595 [2024-12-14 03:21:42.721963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.595 [2024-12-14 03:21:42.721976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.733961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.733976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.745958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.745968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.750583] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
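The second bdevperf pass started here runs a 5-second 50/50 random read/write job (-t 5 -q 128 -w randrw -M 50 -o 8192) against the same Nvme1 controller, and the paired errors that fill the rest of the run (subsystem.c:2130 followed by nvmf_rpc.c:1520) are the target rejecting repeated nvmf_subsystem_add_ns calls for NSID 1, which is still backed by malloc0 from the setup step. The same rejection can be reproduced by hand; this manual call is illustrative only, not part of the harness:
# NSID 1 already carries malloc0, so a second add with -n 1 is refused
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# target logs: Requested NSID 1 already in use / Unable to add namespace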
00:40:27.855 [2024-12-14 03:21:42.750625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417006 ] 00:40:27.855 [2024-12-14 03:21:42.757957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.757968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.769957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.769967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.781959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.781970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.793961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.793972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.805961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.805972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.817958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.817968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.825172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:27.855 [2024-12-14 03:21:42.829960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.829970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.841967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.841984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.847300] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:27.855 [2024-12-14 03:21:42.853962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.853975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.865983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.866011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.877967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.877983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.889965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.889980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.901975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:40:27.855 [2024-12-14 03:21:42.901990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.913966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.913981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.926016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.926037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.937967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.937983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.949963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.949978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.961964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.961979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.973966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.973982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.855 [2024-12-14 03:21:42.985963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.855 [2024-12-14 03:21:42.985973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:42.997958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:42.997968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.009961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.009974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.021959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.021969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.033960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.033970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.045960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.045970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.057962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.057975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.069958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.069968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 
03:21:43.081960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.081971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.093961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.093974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.105969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.105986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 Running I/O for 5 seconds... 00:40:28.114 [2024-12-14 03:21:43.123169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.123189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.137702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.137722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.149634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.149653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.163905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.163928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.178639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.178658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.189794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.189815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.203715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.203734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.218329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.114 [2024-12-14 03:21:43.218348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.114 [2024-12-14 03:21:43.229710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.115 [2024-12-14 03:21:43.229729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.115 [2024-12-14 03:21:43.243749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.115 [2024-12-14 03:21:43.243768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.258547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.258565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.273514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:40:28.374 [2024-12-14 03:21:43.273533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.288132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.288151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.302509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.302528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.313920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.313940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.327582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.327602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.342064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.342083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.354430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.354449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.367743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.367762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.382558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.382576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.398070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.398089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.411124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.411144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.425831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.425855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.436873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.436892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.451391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.451410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.465960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.465979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.478832] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.478850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.491575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.491594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.374 [2024-12-14 03:21:43.506358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.374 [2024-12-14 03:21:43.506377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.521504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.633 [2024-12-14 03:21:43.521523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.535927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.633 [2024-12-14 03:21:43.535946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.550911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.633 [2024-12-14 03:21:43.550929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.566127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.633 [2024-12-14 03:21:43.566146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.578449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.633 [2024-12-14 03:21:43.578467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.591804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.633 [2024-12-14 03:21:43.591823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.606568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.633 [2024-12-14 03:21:43.606586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.622041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.633 [2024-12-14 03:21:43.622060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.634375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.633 [2024-12-14 03:21:43.634393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.647096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.633 [2024-12-14 03:21:43.647115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.657945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.633 [2024-12-14 03:21:43.657963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.672090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.633 [2024-12-14 03:21:43.672110] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.686922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.633 [2024-12-14 03:21:43.686941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.701803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.633 [2024-12-14 03:21:43.701822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.633 [2024-12-14 03:21:43.715793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.634 [2024-12-14 03:21:43.715812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.634 [2024-12-14 03:21:43.730636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.634 [2024-12-14 03:21:43.730655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.634 [2024-12-14 03:21:43.745403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.634 [2024-12-14 03:21:43.745422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.634 [2024-12-14 03:21:43.759109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.634 [2024-12-14 03:21:43.759127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.770446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.770465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.783672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.783691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.798644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.798662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.813840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.813859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.828116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.828135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.842855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.842874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.858868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.858888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.869355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.869375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.883811] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.883831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.898635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.898654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.914344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.914363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.929741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.929760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.943995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.944014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.958905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.958923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.974257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.974276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:43.990061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:43.990080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:44.004185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:44.004203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.893 [2024-12-14 03:21:44.018802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.893 [2024-12-14 03:21:44.018821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.033644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.033663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.048001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.048020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.062570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.062588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.078439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.078459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.093848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.093867] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.107192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.107212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 16758.00 IOPS, 130.92 MiB/s [2024-12-14T02:21:44.285Z] [2024-12-14 03:21:44.122265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.122284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.133969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.133989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.148050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.148069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.162717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.162735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.177939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.177958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.191728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.191747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.206562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.206591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.218577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.218601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.231701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.231720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.246643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.246662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.262359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.262378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.152 [2024-12-14 03:21:44.278380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.152 [2024-12-14 03:21:44.278399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.411 [2024-12-14 03:21:44.293768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.411 [2024-12-14 03:21:44.293786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.411 [2024-12-14 
03:21:44.308282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.411 [2024-12-14 03:21:44.308301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.411 [2024-12-14 03:21:44.322772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.411 [2024-12-14 03:21:44.322790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.411 [2024-12-14 03:21:44.337864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.411 [2024-12-14 03:21:44.337883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.351674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.351693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.366087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.366106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.378825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.378844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.391244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.391264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.405826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.405844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.416705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.416724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.431434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.431452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.445826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.445845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.458768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.458786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.474091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.474110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.485201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.485225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.499363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.499382] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.510359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.510377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.523842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.523862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.412 [2024-12-14 03:21:44.538655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.412 [2024-12-14 03:21:44.538674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.554915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.554933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.570610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.570629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.582627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.582646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.595862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.595882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.610794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.610812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.625997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.626017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.639499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.639518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.654153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.654172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.667627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.667646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.682283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.682301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.693306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.693330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.708310] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.708333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.723067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.723086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.738176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.738196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.751775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.751799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.766402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.766420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.781895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.781915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.671 [2024-12-14 03:21:44.795377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.671 [2024-12-14 03:21:44.795395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.930 [2024-12-14 03:21:44.809961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.930 [2024-12-14 03:21:44.809980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.930 [2024-12-14 03:21:44.821564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.930 [2024-12-14 03:21:44.821584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.930 [2024-12-14 03:21:44.835324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.930 [2024-12-14 03:21:44.835343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.930 [2024-12-14 03:21:44.845036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.930 [2024-12-14 03:21:44.845055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.930 [2024-12-14 03:21:44.860082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.930 [2024-12-14 03:21:44.860101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.930 [2024-12-14 03:21:44.875089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.930 [2024-12-14 03:21:44.875109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.930 [2024-12-14 03:21:44.890103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.930 [2024-12-14 03:21:44.890123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.931 [2024-12-14 03:21:44.900800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.931 [2024-12-14 03:21:44.900819] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.931 [2024-12-14 03:21:44.915883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.931 [2024-12-14 03:21:44.915902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.931 [2024-12-14 03:21:44.930113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.931 [2024-12-14 03:21:44.930133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.931 [2024-12-14 03:21:44.942867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.931 [2024-12-14 03:21:44.942886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.931 [2024-12-14 03:21:44.957466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.931 [2024-12-14 03:21:44.957486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.931 [2024-12-14 03:21:44.971538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.931 [2024-12-14 03:21:44.971558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.931 [2024-12-14 03:21:44.985858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.931 [2024-12-14 03:21:44.985878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.931 [2024-12-14 03:21:44.998626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.931 [2024-12-14 03:21:44.998644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.931 [2024-12-14 03:21:45.011681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.931 [2024-12-14 03:21:45.011704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.931 [2024-12-14 03:21:45.026363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.931 [2024-12-14 03:21:45.026381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.931 [2024-12-14 03:21:45.038587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.931 [2024-12-14 03:21:45.038605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.931 [2024-12-14 03:21:45.054574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.931 [2024-12-14 03:21:45.054594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.069840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.069859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.084152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.084171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.099119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.099138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.114105] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.114124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 16803.50 IOPS, 131.28 MiB/s [2024-12-14T02:21:45.322Z] [2024-12-14 03:21:45.126947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.126965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.139212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.139231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.150258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.150276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.164345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.164365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.179059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.179078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.193594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.193613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.207688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.207707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.222339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.222358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.237790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.237809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.251026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.251046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.261898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.261916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.275912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.275930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.290677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.290696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.306523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:40:30.189 [2024-12-14 03:21:45.306541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.189 [2024-12-14 03:21:45.319731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.189 [2024-12-14 03:21:45.319751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.334194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.334214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.346897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.346915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.359415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.359435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.374403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.374421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.385441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.385460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.399673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.399692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.414654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.414672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.429821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.429840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.442564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.442583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.455831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.455850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.470423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.470442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.481510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.481528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.496308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.496333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.510982] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.511000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.526466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.526485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.537671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.537689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.551606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.551625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.448 [2024-12-14 03:21:45.566253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.448 [2024-12-14 03:21:45.566272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.581680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.581700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.596110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.596129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.610927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.610946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.626393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.626412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.638292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.638320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.654017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.654037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.666962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.666981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.679327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.679346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.690560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.690589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.703428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.703447] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.718477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.718496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.733933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.733952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.748071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.748090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.762672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.762691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.778267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.778285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.793869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.793889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.806583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.806601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.819393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.819412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.707 [2024-12-14 03:21:45.830183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.707 [2024-12-14 03:21:45.830201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.966 [2024-12-14 03:21:45.843523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.966 [2024-12-14 03:21:45.843542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.966 [2024-12-14 03:21:45.858459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.966 [2024-12-14 03:21:45.858478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.966 [2024-12-14 03:21:45.870849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.966 [2024-12-14 03:21:45.870868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.966 [2024-12-14 03:21:45.885669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.966 [2024-12-14 03:21:45.885688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.966 [2024-12-14 03:21:45.896484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.966 [2024-12-14 03:21:45.896504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.966 [2024-12-14 03:21:45.911388] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.966 [2024-12-14 03:21:45.911407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.966 [2024-12-14 03:21:45.926296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.966 [2024-12-14 03:21:45.926321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.966 [2024-12-14 03:21:45.941390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.966 [2024-12-14 03:21:45.941409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.966 [2024-12-14 03:21:45.956202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.967 [2024-12-14 03:21:45.956222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.967 [2024-12-14 03:21:45.970377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.967 [2024-12-14 03:21:45.970397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.967 [2024-12-14 03:21:45.982717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.967 [2024-12-14 03:21:45.982735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.967 [2024-12-14 03:21:45.995881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.967 [2024-12-14 03:21:45.995901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.967 [2024-12-14 03:21:46.010913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.967 [2024-12-14 03:21:46.010932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.967 [2024-12-14 03:21:46.026641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.967 [2024-12-14 03:21:46.026660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.967 [2024-12-14 03:21:46.037982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.967 [2024-12-14 03:21:46.038001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.967 [2024-12-14 03:21:46.052253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.967 [2024-12-14 03:21:46.052277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.967 [2024-12-14 03:21:46.066964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.967 [2024-12-14 03:21:46.066984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.967 [2024-12-14 03:21:46.081208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.967 [2024-12-14 03:21:46.081227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.967 [2024-12-14 03:21:46.094923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.967 [2024-12-14 03:21:46.094942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.107633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.107653] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.122414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.122432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 16819.00 IOPS, 131.40 MiB/s [2024-12-14T02:21:46.359Z] [2024-12-14 03:21:46.137914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.137934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.151121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.151140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.163486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.163505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.173784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.173804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.187883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.187903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.201774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.201793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.214991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.215010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.229959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.229978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.243903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.243923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.258921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.258942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.273982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.274003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.288102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.288122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.302684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.302702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 
03:21:46.317675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.317698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.331907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.331926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.345966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.345985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.226 [2024-12-14 03:21:46.358226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.226 [2024-12-14 03:21:46.358244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.485 [2024-12-14 03:21:46.372243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.485 [2024-12-14 03:21:46.372262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.485 [2024-12-14 03:21:46.386974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.485 [2024-12-14 03:21:46.386992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.485 [2024-12-14 03:21:46.401717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.485 [2024-12-14 03:21:46.401736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.485 [2024-12-14 03:21:46.414477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.485 [2024-12-14 03:21:46.414497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.485 [2024-12-14 03:21:46.427134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.485 [2024-12-14 03:21:46.427153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.485 [2024-12-14 03:21:46.442405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.485 [2024-12-14 03:21:46.442424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.485 [2024-12-14 03:21:46.452881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.485 [2024-12-14 03:21:46.452899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.485 [2024-12-14 03:21:46.467675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.485 [2024-12-14 03:21:46.467693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.485 [2024-12-14 03:21:46.482242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.485 [2024-12-14 03:21:46.482262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.485 [2024-12-14 03:21:46.494465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.485 [2024-12-14 03:21:46.494483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.485 [2024-12-14 03:21:46.507896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.485 [2024-12-14 03:21:46.507915] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.486 [2024-12-14 03:21:46.522868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.486 [2024-12-14 03:21:46.522887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.486 [2024-12-14 03:21:46.538067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.486 [2024-12-14 03:21:46.538086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.486 [2024-12-14 03:21:46.550605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.486 [2024-12-14 03:21:46.550623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.486 [2024-12-14 03:21:46.563871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.486 [2024-12-14 03:21:46.563890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.486 [2024-12-14 03:21:46.579028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.486 [2024-12-14 03:21:46.579052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.486 [2024-12-14 03:21:46.594066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.486 [2024-12-14 03:21:46.594085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.486 [2024-12-14 03:21:46.607700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.486 [2024-12-14 03:21:46.607719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.622113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.622131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.635933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.635952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.650565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.650584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.665629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.665648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.679501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.679519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.694054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.694074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.705515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.705535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.719433] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.719453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.734358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.734377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.747064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.747083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.761971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.761990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.774448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.774467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.787098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.787117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.801558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.801587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.815427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.815446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.830116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.830135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.841046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.841065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.855510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.855529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.745 [2024-12-14 03:21:46.866351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.745 [2024-12-14 03:21:46.866369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:46.879664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:46.879683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:46.894828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:46.894847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:46.910280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:46.910299] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:46.922332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:46.922350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:46.935517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:46.935535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:46.946523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:46.946541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:46.962056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:46.962075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:46.975092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:46.975110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:46.989995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:46.990014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:47.001403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:47.001422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:47.016072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:47.016092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:47.030893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:47.030913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:47.046285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:47.046303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:47.062085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:47.062104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:47.075644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:47.075664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:47.090858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:47.090878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:47.105768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:47.105787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 [2024-12-14 03:21:47.119743] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:47.119762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.004 16843.00 IOPS, 131.59 MiB/s [2024-12-14T02:21:47.137Z] [2024-12-14 03:21:47.134492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.004 [2024-12-14 03:21:47.134512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.263 [2024-12-14 03:21:47.149752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.263 [2024-12-14 03:21:47.149771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.263 [2024-12-14 03:21:47.164302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.263 [2024-12-14 03:21:47.164329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.263 [2024-12-14 03:21:47.178491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.263 [2024-12-14 03:21:47.178510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.263 [2024-12-14 03:21:47.190860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.264 [2024-12-14 03:21:47.190879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.264 [2024-12-14 03:21:47.203609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.264 [2024-12-14 03:21:47.203627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.264 [2024-12-14 03:21:47.218546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.264 [2024-12-14 03:21:47.218565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.264 [2024-12-14 03:21:47.233781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.264 [2024-12-14 03:21:47.233800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.264 [2024-12-14 03:21:47.247922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.264 [2024-12-14 03:21:47.247940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.264 [2024-12-14 03:21:47.262619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.264 [2024-12-14 03:21:47.262638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.264 [2024-12-14 03:21:47.277799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.264 [2024-12-14 03:21:47.277819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.264 [2024-12-14 03:21:47.291126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.264 [2024-12-14 03:21:47.291147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.264 [2024-12-14 03:21:47.301921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.264 [2024-12-14 03:21:47.301940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.264 [2024-12-14 03:21:47.315783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
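The repeated subsystem.c/nvmf_rpc.c record pairs above and below appear to come from a background loop in the zcopy test (reaped further down via wait 417006) that keeps issuing nvmf_subsystem_add_ns for NSID 1 while that NSID is still attached, so spdk_nvmf_subsystem_add_ns_ext rejects every attempt and the RPC layer logs the follow-up failure. A minimal sketch that provokes the same pair of messages against a running target is shown below; it assumes scripts/rpc.py talking to the default RPC socket and reuses the cnode1 subsystem and malloc0 bdev names visible later in this trace.

  # NSID 1 is still attached, so this fails with "Requested NSID 1 already in use"
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # detaching the namespace first lets the same call succeed
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1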
00:40:32.264 [2024-12-14 03:21:47.315802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.264 [2024-12-14 03:21:47.330595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.264 [2024-12-14 03:21:47.330614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.264 [2024-12-14 03:21:47.345357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.264 [2024-12-14 03:21:47.345376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.264 [2024-12-14 03:21:47.360264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.264 [2024-12-14 03:21:47.360287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.264 [2024-12-14 03:21:47.374143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.264 [2024-12-14 03:21:47.374163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.264 [2024-12-14 03:21:47.387275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.264 [2024-12-14 03:21:47.387294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.401656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.401675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.415603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.415622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.430245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.430263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.446275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.446294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.462037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.462056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.474715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.474733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.487163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.487181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.498440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.498458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.511630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.511649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.526006] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.526024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.539849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.539868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.554612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.554631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.565565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.565584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.579628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.579647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.594218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.594238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.605451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.605470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.620121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.620144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.634677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.634696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.523 [2024-12-14 03:21:47.650116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.523 [2024-12-14 03:21:47.650136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.782 [2024-12-14 03:21:47.662615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.782 [2024-12-14 03:21:47.662635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.782 [2024-12-14 03:21:47.675260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.782 [2024-12-14 03:21:47.675279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.782 [2024-12-14 03:21:47.689909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.782 [2024-12-14 03:21:47.689928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.782 [2024-12-14 03:21:47.702611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.782 [2024-12-14 03:21:47.702630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.782 [2024-12-14 03:21:47.715567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.782 [2024-12-14 03:21:47.715587] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.782 [2024-12-14 03:21:47.729777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.782 [2024-12-14 03:21:47.729796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.782 [2024-12-14 03:21:47.741586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.782 [2024-12-14 03:21:47.741604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.782 [2024-12-14 03:21:47.755645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.782 [2024-12-14 03:21:47.755664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.782 [2024-12-14 03:21:47.766442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.782 [2024-12-14 03:21:47.766459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.782 [2024-12-14 03:21:47.779633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.782 [2024-12-14 03:21:47.779652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.783 [2024-12-14 03:21:47.794228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.783 [2024-12-14 03:21:47.794248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.783 [2024-12-14 03:21:47.805802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.783 [2024-12-14 03:21:47.805822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.783 [2024-12-14 03:21:47.819866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.783 [2024-12-14 03:21:47.819885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.783 [2024-12-14 03:21:47.834421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.783 [2024-12-14 03:21:47.834439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.783 [2024-12-14 03:21:47.850319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.783 [2024-12-14 03:21:47.850338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.783 [2024-12-14 03:21:47.865885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.783 [2024-12-14 03:21:47.865907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.783 [2024-12-14 03:21:47.879513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.783 [2024-12-14 03:21:47.879539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.783 [2024-12-14 03:21:47.893907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.783 [2024-12-14 03:21:47.893927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.783 [2024-12-14 03:21:47.905356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.783 [2024-12-14 03:21:47.905375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:47.920023] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:47.920042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:47.934379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:47.934398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:47.946486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:47.946505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:47.959578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:47.959597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:47.974669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:47.974688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:47.989972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:47.989992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:48.004141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:48.004160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:48.018815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:48.018834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:48.033643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:48.033662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:48.047958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:48.047977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:48.062507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:48.062526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:48.077447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:48.077472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:48.091845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:48.091864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:48.106210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:48.106229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.042 [2024-12-14 03:21:48.117327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.042 [2024-12-14 03:21:48.117346] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:33.042 16850.20 IOPS, 131.64 MiB/s [2024-12-14T02:21:48.175Z] [2024-12-14 03:21:48.131021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:33.042 [2024-12-14 03:21:48.131041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:33.042
00:40:33.042 Latency(us)
00:40:33.042 [2024-12-14T02:21:48.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:33.042 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:40:33.042 Nvme1n1 : 5.01 16853.09 131.66 0.00 0.00 7587.77 1833.45 12857.54
00:40:33.042 [2024-12-14T02:21:48.175Z] ===================================================================================================================
00:40:33.042 [2024-12-14T02:21:48.175Z] Total : 16853.09 131.66 0.00 0.00 7587.77 1833.45 12857.54
00:40:33.042 [2024-12-14 03:21:48.141966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:33.042 [2024-12-14 03:21:48.141985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:33.042 [2024-12-14 03:21:48.153967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:33.042 [2024-12-14 03:21:48.153983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:33.042 [2024-12-14 03:21:48.165977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:33.042 [2024-12-14 03:21:48.165998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:33.302 [2024-12-14 03:21:48.177970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:33.302 [2024-12-14 03:21:48.177986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:33.302 [2024-12-14 03:21:48.189972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:33.302 [2024-12-14 03:21:48.189991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:33.302 [2024-12-14 03:21:48.201966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:33.302 [2024-12-14 03:21:48.201983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:33.302 [2024-12-14 03:21:48.213969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:33.302 [2024-12-14 03:21:48.213992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:33.302 [2024-12-14 03:21:48.225969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:33.302 [2024-12-14 03:21:48.225985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:33.302 [2024-12-14 03:21:48.237967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:33.302 [2024-12-14 03:21:48.237984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:33.302 [2024-12-14 03:21:48.249958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:33.302 [2024-12-14 03:21:48.249969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:33.302 [2024-12-14 03:21:48.261963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:33.302 [2024-12-14
03:21:48.261976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.302 [2024-12-14 03:21:48.273961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.302 [2024-12-14 03:21:48.273973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.302 [2024-12-14 03:21:48.285961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.302 [2024-12-14 03:21:48.285972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (417006) - No such process 00:40:33.302 03:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 417006 00:40:33.302 03:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:33.302 03:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.302 03:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:33.302 03:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.302 03:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:33.302 03:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.302 03:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:33.302 delay0 00:40:33.302 03:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.302 03:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:40:33.302 03:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.302 03:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:33.302 03:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.302 03:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:40:33.302 [2024-12-14 03:21:48.432056] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:40:39.869 Initializing NVMe Controllers 00:40:39.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:39.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:39.869 Initialization complete. Launching workers. 
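The tail of the trace above swaps the original namespace for a delay bdev and then runs the abort example against it (its per-controller summary follows below). Condensed, and with rpc_cmd replaced by direct scripts/rpc.py calls (rpc_cmd is the test wrapper around rpc.py; the default RPC socket is assumed), the sequence is roughly:

  # detach the current namespace and re-add it backed by a delay bdev on top of malloc0
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # drive queued random I/O at the slow namespace and exercise abort handling over TCP
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'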
00:40:39.869 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 874 00:40:39.869 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1153, failed to submit 41 00:40:39.869 success 1004, unsuccessful 149, failed 0 00:40:39.869 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:40:39.869 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:40:39.869 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:39.869 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:40:39.869 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:39.869 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:40:39.869 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:39.869 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:39.869 rmmod nvme_tcp 00:40:39.869 rmmod nvme_fabrics 00:40:39.869 rmmod nvme_keyring 00:40:39.869 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:39.869 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:40:39.869 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:40:39.869 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 416853 ']' 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 416853 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 416853 ']' 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 416853 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 416853 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 416853' 00:40:39.870 killing process with pid 416853 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 416853 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 416853 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:39.870 03:21:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:39.870 03:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:40:40.128 03:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:40.129 03:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:40.129 03:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:40.129 03:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:40.129 03:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:42.033 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:42.033 00:40:42.033 real 0m31.437s 00:40:42.033 user 0m41.132s 00:40:42.033 sys 0m11.946s 00:40:42.033 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:42.033 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:42.033 ************************************ 00:40:42.033 END TEST nvmf_zcopy 00:40:42.033 ************************************ 00:40:42.033 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:42.033 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:42.033 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:42.033 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:42.033 ************************************ 00:40:42.033 START TEST nvmf_nmic 00:40:42.033 ************************************ 00:40:42.033 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:42.293 * Looking for test storage... 
00:40:42.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:42.293 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:42.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.293 --rc genhtml_branch_coverage=1 00:40:42.293 --rc genhtml_function_coverage=1 00:40:42.293 --rc genhtml_legend=1 00:40:42.293 --rc geninfo_all_blocks=1 00:40:42.293 --rc geninfo_unexecuted_blocks=1 00:40:42.293 00:40:42.293 ' 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:42.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.294 --rc genhtml_branch_coverage=1 00:40:42.294 --rc genhtml_function_coverage=1 00:40:42.294 --rc genhtml_legend=1 00:40:42.294 --rc geninfo_all_blocks=1 00:40:42.294 --rc geninfo_unexecuted_blocks=1 00:40:42.294 00:40:42.294 ' 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:42.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.294 --rc genhtml_branch_coverage=1 00:40:42.294 --rc genhtml_function_coverage=1 00:40:42.294 --rc genhtml_legend=1 00:40:42.294 --rc geninfo_all_blocks=1 00:40:42.294 --rc geninfo_unexecuted_blocks=1 00:40:42.294 00:40:42.294 ' 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:42.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.294 --rc genhtml_branch_coverage=1 00:40:42.294 --rc genhtml_function_coverage=1 00:40:42.294 --rc genhtml_legend=1 00:40:42.294 --rc geninfo_all_blocks=1 00:40:42.294 --rc geninfo_unexecuted_blocks=1 00:40:42.294 00:40:42.294 ' 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:42.294 03:21:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:40:42.294 03:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:48.865 03:22:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:48.865 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:48.866 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:48.866 03:22:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:48.866 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:48.866 Found net devices under 0000:af:00.0: cvl_0_0 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.866 
03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:48.866 Found net devices under 0000:af:00.1: cvl_0_1 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:48.866 03:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
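The nvmf_tcp_init steps above (continued in the next lines with link-up, an iptables ACCEPT rule for port 4420, and ping checks in both directions) build a back-to-back TCP path between the two ports of the same NIC: cvl_0_0 is moved into a private namespace as the target side at 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24. Condensed into a standalone sketch (interface, namespace and address values copied from this run, not a general-purpose script):

  NS=cvl_0_0_ns_spdk; TGT_IF=cvl_0_0; INI_IF=cvl_0_1
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  # Open the NVMe/TCP port toward the initiator-side interface.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                      # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator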
00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:48.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:48.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:40:48.866 00:40:48.866 --- 10.0.0.2 ping statistics --- 00:40:48.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.866 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:48.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:48.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:40:48.866 00:40:48.866 --- 10.0.0.1 ping statistics --- 00:40:48.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.866 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=419384 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 419384 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 419384 ']' 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:48.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:48.866 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.866 [2024-12-14 03:22:03.311216] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:48.866 [2024-12-14 03:22:03.312129] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:40:48.866 [2024-12-14 03:22:03.312163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:48.867 [2024-12-14 03:22:03.391052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:48.867 [2024-12-14 03:22:03.413918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:48.867 [2024-12-14 03:22:03.413956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:48.867 [2024-12-14 03:22:03.413963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:48.867 [2024-12-14 03:22:03.413968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:48.867 [2024-12-14 03:22:03.413974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:48.867 [2024-12-14 03:22:03.415395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:48.867 [2024-12-14 03:22:03.415503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:48.867 [2024-12-14 03:22:03.415588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:48.867 [2024-12-14 03:22:03.415590] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:48.867 [2024-12-14 03:22:03.477961] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:48.867 [2024-12-14 03:22:03.479083] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:48.867 [2024-12-14 03:22:03.479275] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
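The target itself is launched inside that namespace in interrupt mode (nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF), and the NOTICE lines above confirm that the app thread and the four nvmf_tgt poll-group threads were switched to interrupt mode on reactors 0-3. A simplified stand-in for the start-and-wait step, run from an SPDK checkout; the polling loop here is an assumption, not the harness's actual waitforlisten helper:

  # Start the target in the namespace and wait for its JSON-RPC socket to answer.
  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"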
00:40:48.867 [2024-12-14 03:22:03.479609] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:48.867 [2024-12-14 03:22:03.479632] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.867 [2024-12-14 03:22:03.556381] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.867 Malloc0 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.867 [2024-12-14 03:22:03.636421] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:48.867 test case1: single bdev can't be used in multiple subsystems 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.867 [2024-12-14 03:22:03.664053] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:40:48.867 [2024-12-14 03:22:03.664073] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:48.867 [2024-12-14 03:22:03.664081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:48.867 request: 00:40:48.867 { 00:40:48.867 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:48.867 "namespace": { 00:40:48.867 "bdev_name": "Malloc0", 00:40:48.867 "no_auto_visible": false, 00:40:48.867 "hide_metadata": false 00:40:48.867 }, 00:40:48.867 "method": "nvmf_subsystem_add_ns", 00:40:48.867 "req_id": 1 00:40:48.867 } 00:40:48.867 Got JSON-RPC error response 00:40:48.867 response: 00:40:48.867 { 00:40:48.867 "code": -32602, 00:40:48.867 "message": "Invalid parameters" 00:40:48.867 } 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:48.867 03:22:03 
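Up to this point the nmic test has configured the target over JSON-RPC and exercised test case 1: Malloc0 is already claimed (exclusive_write) by cnode1, so adding it to cnode2 is rejected and the RPC returns -32602, exactly the failure the test expects. The same sequence, sketched with rpc.py in place of the harness's rpc_cmd wrapper (paths assume an SPDK checkout; flags are copied from the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  # Expected to fail: a bdev can be claimed by only one subsystem at a time.
  if ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo 'unexpected success: Malloc0 added to a second subsystem' >&2
    exit 1
  fi

The "Invalid parameters" JSON-RPC response in the trace is the RPC surface of the bdev-claim error reported by bdev.c and subsystem.c just above it.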
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:48.867 Adding namespace failed - expected result. 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:48.867 test case2: host connect to nvmf target in multiple paths 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.867 [2024-12-14 03:22:03.676120] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:48.867 03:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:49.126 03:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:49.126 03:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:40:49.126 03:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:49.126 03:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:49.126 03:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:40:51.659 03:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:51.659 03:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:51.659 03:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:51.659 03:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:51.659 03:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:51.659 03:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:40:51.659 03:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:51.659 [global] 00:40:51.659 thread=1 00:40:51.659 invalidate=1 
00:40:51.659 rw=write 00:40:51.659 time_based=1 00:40:51.659 runtime=1 00:40:51.659 ioengine=libaio 00:40:51.659 direct=1 00:40:51.659 bs=4096 00:40:51.659 iodepth=1 00:40:51.659 norandommap=0 00:40:51.659 numjobs=1 00:40:51.659 00:40:51.659 verify_dump=1 00:40:51.659 verify_backlog=512 00:40:51.659 verify_state_save=0 00:40:51.659 do_verify=1 00:40:51.659 verify=crc32c-intel 00:40:51.659 [job0] 00:40:51.659 filename=/dev/nvme0n1 00:40:51.659 Could not set queue depth (nvme0n1) 00:40:51.659 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:51.659 fio-3.35 00:40:51.659 Starting 1 thread 00:40:52.595 00:40:52.595 job0: (groupid=0, jobs=1): err= 0: pid=419566: Sat Dec 14 03:22:07 2024 00:40:52.595 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:40:52.595 slat (nsec): min=7031, max=41107, avg=7966.76, stdev=1501.47 00:40:52.595 clat (usec): min=170, max=295, avg=216.66, stdev=16.17 00:40:52.595 lat (usec): min=191, max=303, avg=224.63, stdev=16.11 00:40:52.595 clat percentiles (usec): 00:40:52.595 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 206], 00:40:52.595 | 30.00th=[ 208], 40.00th=[ 210], 50.00th=[ 210], 60.00th=[ 212], 00:40:52.595 | 70.00th=[ 217], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 249], 00:40:52.595 | 99.00th=[ 255], 99.50th=[ 258], 99.90th=[ 269], 99.95th=[ 273], 00:40:52.595 | 99.99th=[ 297] 00:40:52.595 write: IOPS=2559, BW=10.00MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:40:52.595 slat (usec): min=10, max=23052, avg=20.69, stdev=455.21 00:40:52.595 clat (usec): min=116, max=294, avg=137.84, stdev= 7.18 00:40:52.595 lat (usec): min=135, max=23341, avg=158.53, stdev=458.25 00:40:52.595 clat percentiles (usec): 00:40:52.595 | 1.00th=[ 130], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 135], 00:40:52.595 | 30.00th=[ 137], 40.00th=[ 137], 50.00th=[ 137], 60.00th=[ 139], 00:40:52.595 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 143], 95.00th=[ 145], 00:40:52.595 | 99.00th=[ 161], 99.50th=[ 186], 99.90th=[ 194], 99.95th=[ 289], 00:40:52.595 | 99.99th=[ 293] 00:40:52.595 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:40:52.595 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:40:52.595 lat (usec) : 250=97.83%, 500=2.17% 00:40:52.595 cpu : usr=3.50%, sys=8.80%, ctx=5126, majf=0, minf=1 00:40:52.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:52.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:52.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:52.595 issued rwts: total=2560,2562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:52.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:52.595 00:40:52.595 Run status group 0 (all jobs): 00:40:52.595 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:40:52.595 WRITE: bw=10.00MiB/s (10.5MB/s), 10.00MiB/s-10.00MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:40:52.595 00:40:52.595 Disk stats (read/write): 00:40:52.595 nvme0n1: ios=2131/2560, merge=0/0, ticks=1414/317, in_queue=1731, util=98.00% 00:40:52.595 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:52.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:52.854 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- 
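Test case 2 then connects the host to cnode1 over both listeners (4420 and 4421), waits for the SPDKISFASTANDAWESOME serial to appear in lsblk, and drives a one-second, 4 KiB, queue-depth-1 libaio write job with crc32c verification through the fio wrapper; the summary above reports roughly 10 MiB/s in each direction (the reads are the verify pass) against /dev/nvme0n1. An approximate standalone equivalent; the device name, and calling fio directly instead of through scripts/fio-wrapper, are assumptions for this sketch:

  HOSTNQN=$(nvme gen-hostnqn)
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn="$HOSTNQN"
  # Same job shape as the dump above: sequential 4 KiB writes, QD1, crc32c verify.
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The trace that follows tears the fixture down again: both controllers are disconnected, nvme-tcp, nvme-fabrics and nvme-keyring are unloaded, the target (pid 419384) is killed, the SPDK_NVMF iptables rule is stripped, and the namespace is removed before the next test, nvmf_fio_target, rebuilds the same environment.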
# waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:52.854 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:52.854 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:52.854 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:52.854 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:52.854 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:52.854 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:52.854 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:52.854 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:52.854 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:52.854 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:52.854 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:52.854 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:52.855 rmmod nvme_tcp 00:40:52.855 rmmod nvme_fabrics 00:40:52.855 rmmod nvme_keyring 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 419384 ']' 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 419384 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 419384 ']' 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 419384 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 419384 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 419384' 00:40:52.855 killing process with pid 
419384 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 419384 00:40:52.855 03:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 419384 00:40:53.114 03:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:53.114 03:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:53.114 03:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:53.114 03:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:53.114 03:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:53.114 03:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:53.114 03:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:53.114 03:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:53.114 03:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:53.114 03:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:53.114 03:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:53.114 03:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:55.650 00:40:55.650 real 0m13.036s 00:40:55.650 user 0m24.196s 00:40:55.650 sys 0m6.107s 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:55.650 ************************************ 00:40:55.650 END TEST nvmf_nmic 00:40:55.650 ************************************ 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:55.650 ************************************ 00:40:55.650 START TEST nvmf_fio_target 00:40:55.650 ************************************ 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:55.650 * Looking for test storage... 
00:40:55.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:55.650 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:55.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.651 --rc genhtml_branch_coverage=1 00:40:55.651 --rc genhtml_function_coverage=1 00:40:55.651 --rc genhtml_legend=1 00:40:55.651 --rc geninfo_all_blocks=1 00:40:55.651 --rc geninfo_unexecuted_blocks=1 00:40:55.651 00:40:55.651 ' 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:55.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.651 --rc genhtml_branch_coverage=1 00:40:55.651 --rc genhtml_function_coverage=1 00:40:55.651 --rc genhtml_legend=1 00:40:55.651 --rc geninfo_all_blocks=1 00:40:55.651 --rc geninfo_unexecuted_blocks=1 00:40:55.651 00:40:55.651 ' 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:55.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.651 --rc genhtml_branch_coverage=1 00:40:55.651 --rc genhtml_function_coverage=1 00:40:55.651 --rc genhtml_legend=1 00:40:55.651 --rc geninfo_all_blocks=1 00:40:55.651 --rc geninfo_unexecuted_blocks=1 00:40:55.651 00:40:55.651 ' 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:55.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.651 --rc genhtml_branch_coverage=1 00:40:55.651 --rc genhtml_function_coverage=1 00:40:55.651 --rc genhtml_legend=1 00:40:55.651 --rc geninfo_all_blocks=1 00:40:55.651 --rc geninfo_unexecuted_blocks=1 00:40:55.651 
00:40:55.651 ' 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:55.651 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:55.652 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:55.652 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:55.652 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:55.652 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.652 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:55.652 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:55.652 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:55.652 03:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:00.926 03:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:00.926 03:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:41:00.926 03:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:00.926 03:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:00.926 03:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:00.926 03:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:00.926 03:22:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:00.926 03:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:41:00.926 03:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:00.926 03:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:41:00.926 03:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:41:00.926 03:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:00.926 03:22:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:00.926 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:00.927 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:00.927 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:00.927 Found net 
devices under 0000:af:00.0: cvl_0_0 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:00.927 Found net devices under 0000:af:00.1: cvl_0_1 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:00.927 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:01.186 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:01.186 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:01.186 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:01.186 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:01.186 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:01.186 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:01.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:01.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:41:01.445 00:41:01.445 --- 10.0.0.2 ping statistics --- 00:41:01.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.445 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:01.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:01.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:41:01.445 00:41:01.445 --- 10.0.0.1 ping statistics --- 00:41:01.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.445 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=421811 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:01.445 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 421811 00:41:01.446 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 421811 ']' 00:41:01.446 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:01.446 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:01.446 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:01.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:01.446 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:01.446 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:01.446 [2024-12-14 03:22:16.443992] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:01.446 [2024-12-14 03:22:16.444883] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:01.446 [2024-12-14 03:22:16.444915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:01.446 [2024-12-14 03:22:16.521933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:01.446 [2024-12-14 03:22:16.544433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:01.446 [2024-12-14 03:22:16.544473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:01.446 [2024-12-14 03:22:16.544480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:01.446 [2024-12-14 03:22:16.544497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:01.446 [2024-12-14 03:22:16.544502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:01.446 [2024-12-14 03:22:16.545770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:01.446 [2024-12-14 03:22:16.545875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:01.446 [2024-12-14 03:22:16.545985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:01.446 [2024-12-14 03:22:16.545987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:01.705 [2024-12-14 03:22:16.608641] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:01.705 [2024-12-14 03:22:16.609773] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:01.705 [2024-12-14 03:22:16.609972] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:01.705 [2024-12-14 03:22:16.610343] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:01.705 [2024-12-14 03:22:16.610380] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
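For reference, the test-bed bring-up traced above (nvmf_tcp_init in nvmf/common.sh followed by nvmfappstart) reduces to roughly the shell sequence below. This is a hedged reconstruction from the trace only: it assumes root privileges, the E810 net device names cvl_0_0/cvl_0_1 reported for this host, and the nvmf_tgt binary path used by this workspace; it is a sketch of what the trace shows, not the canonical common.sh implementation.

#!/usr/bin/env bash
set -euo pipefail

TARGET_NS=cvl_0_0_ns_spdk    # namespace that owns the target-side port (from the trace)
TARGET_IF=cvl_0_0            # target-side netdev, moved into the namespace
INITIATOR_IF=cvl_0_1         # initiator-side netdev, stays in the root namespace
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

# Start from clean addresses and an isolated namespace for the target port.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"

# Address and bring up both ends (target side inside the namespace).
ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up

# Open NVMe/TCP port 4420 on the initiator interface and sanity-check reachability both ways.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$TARGET_IP"
ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"

# Launch the SPDK target inside the namespace, in interrupt mode on a 4-core mask, as in this run.
ip netns exec "$TARGET_NS" "$NVMF_TGT" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &

The subsequent rpc.py calls in the trace (nvmf_create_transport -t tcp -o -u 8192, the bdev_malloc_create/bdev_raid_create steps, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener on 10.0.0.2:4420) then run against this target before fio is started over the nvme-tcp connection.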
00:41:01.705 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:01.705 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:41:01.705 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:01.705 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:01.705 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:01.705 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:01.705 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:01.972 [2024-12-14 03:22:16.846714] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:01.972 03:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:01.972 03:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:01.972 03:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:02.241 03:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:02.241 03:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:02.514 03:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:02.514 03:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:02.807 03:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:02.807 03:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:03.081 03:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:03.081 03:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:03.081 03:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:03.355 03:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:03.355 03:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:03.656 03:22:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:41:03.656 03:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:03.656 03:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:03.916 03:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:03.916 03:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:04.175 03:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:04.175 03:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:04.433 03:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:04.433 [2024-12-14 03:22:19.490621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:04.433 03:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:04.692 03:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:04.951 03:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:05.209 03:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:05.209 03:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:41:05.209 03:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:05.209 03:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:41:05.209 03:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:41:05.209 03:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:41:07.114 03:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:07.114 03:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:41:07.114 03:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:07.114 03:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:41:07.114 03:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:07.114 03:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:41:07.114 03:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:07.372 [global] 00:41:07.373 thread=1 00:41:07.373 invalidate=1 00:41:07.373 rw=write 00:41:07.373 time_based=1 00:41:07.373 runtime=1 00:41:07.373 ioengine=libaio 00:41:07.373 direct=1 00:41:07.373 bs=4096 00:41:07.373 iodepth=1 00:41:07.373 norandommap=0 00:41:07.373 numjobs=1 00:41:07.373 00:41:07.373 verify_dump=1 00:41:07.373 verify_backlog=512 00:41:07.373 verify_state_save=0 00:41:07.373 do_verify=1 00:41:07.373 verify=crc32c-intel 00:41:07.373 [job0] 00:41:07.373 filename=/dev/nvme0n1 00:41:07.373 [job1] 00:41:07.373 filename=/dev/nvme0n2 00:41:07.373 [job2] 00:41:07.373 filename=/dev/nvme0n3 00:41:07.373 [job3] 00:41:07.373 filename=/dev/nvme0n4 00:41:07.373 Could not set queue depth (nvme0n1) 00:41:07.373 Could not set queue depth (nvme0n2) 00:41:07.373 Could not set queue depth (nvme0n3) 00:41:07.373 Could not set queue depth (nvme0n4) 00:41:07.639 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:07.639 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:07.639 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:07.639 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:07.639 fio-3.35 00:41:07.639 Starting 4 threads 00:41:09.026 00:41:09.026 job0: (groupid=0, jobs=1): err= 0: pid=422088: Sat Dec 14 03:22:23 2024 00:41:09.026 read: IOPS=2196, BW=8787KiB/s (8998kB/s)(8796KiB/1001msec) 00:41:09.026 slat (nsec): min=6613, max=43653, avg=7640.50, stdev=1108.67 00:41:09.026 clat (usec): min=189, max=366, avg=230.45, stdev=20.96 00:41:09.026 lat (usec): min=196, max=376, avg=238.09, stdev=20.95 00:41:09.026 clat percentiles (usec): 00:41:09.026 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:41:09.026 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 239], 60.00th=[ 243], 00:41:09.026 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 258], 00:41:09.026 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 297], 99.95th=[ 326], 00:41:09.026 | 99.99th=[ 367] 00:41:09.026 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:41:09.026 slat (nsec): min=9978, max=72202, avg=11238.02, stdev=1956.65 00:41:09.026 clat (usec): min=130, max=286, avg=169.45, stdev=30.50 00:41:09.026 lat (usec): min=140, max=297, avg=180.68, stdev=30.62 00:41:09.026 clat percentiles (usec): 00:41:09.026 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:41:09.026 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:41:09.026 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 208], 95.00th=[ 255], 00:41:09.026 | 99.00th=[ 269], 99.50th=[ 
273], 99.90th=[ 281], 99.95th=[ 285], 00:41:09.026 | 99.99th=[ 285] 00:41:09.026 bw ( KiB/s): min=11080, max=11080, per=46.89%, avg=11080.00, stdev= 0.00, samples=1 00:41:09.026 iops : min= 2770, max= 2770, avg=2770.00, stdev= 0.00, samples=1 00:41:09.026 lat (usec) : 250=90.17%, 500=9.83% 00:41:09.026 cpu : usr=3.90%, sys=7.50%, ctx=4759, majf=0, minf=1 00:41:09.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:09.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.026 issued rwts: total=2199,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:09.026 job1: (groupid=0, jobs=1): err= 0: pid=422089: Sat Dec 14 03:22:23 2024 00:41:09.026 read: IOPS=325, BW=1303KiB/s (1334kB/s)(1304KiB/1001msec) 00:41:09.026 slat (nsec): min=6888, max=24736, avg=9044.54, stdev=3855.96 00:41:09.026 clat (usec): min=229, max=41174, avg=2750.93, stdev=9764.00 00:41:09.026 lat (usec): min=237, max=41182, avg=2759.97, stdev=9766.58 00:41:09.026 clat percentiles (usec): 00:41:09.026 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 243], 00:41:09.026 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:41:09.026 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 302], 95.00th=[40633], 00:41:09.026 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:09.026 | 99.99th=[41157] 00:41:09.026 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:41:09.026 slat (nsec): min=9957, max=42135, avg=11359.90, stdev=1979.62 00:41:09.026 clat (usec): min=150, max=355, avg=179.67, stdev=15.40 00:41:09.026 lat (usec): min=160, max=397, avg=191.03, stdev=16.24 00:41:09.026 clat percentiles (usec): 00:41:09.026 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:41:09.026 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:41:09.026 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 206], 00:41:09.026 | 99.00th=[ 219], 99.50th=[ 260], 99.90th=[ 355], 99.95th=[ 355], 00:41:09.026 | 99.99th=[ 355] 00:41:09.026 bw ( KiB/s): min= 4096, max= 4096, per=17.33%, avg=4096.00, stdev= 0.00, samples=1 00:41:09.026 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:09.026 lat (usec) : 250=76.61%, 500=21.00% 00:41:09.026 lat (msec) : 50=2.39% 00:41:09.026 cpu : usr=1.10%, sys=0.90%, ctx=838, majf=0, minf=1 00:41:09.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:09.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.026 issued rwts: total=326,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:09.026 job2: (groupid=0, jobs=1): err= 0: pid=422090: Sat Dec 14 03:22:23 2024 00:41:09.026 read: IOPS=384, BW=1538KiB/s (1575kB/s)(1540KiB/1001msec) 00:41:09.026 slat (nsec): min=7517, max=28763, avg=9658.35, stdev=3631.47 00:41:09.026 clat (usec): min=214, max=42942, avg=2314.13, stdev=8846.36 00:41:09.026 lat (usec): min=223, max=42970, avg=2323.79, stdev=8849.33 00:41:09.026 clat percentiles (usec): 00:41:09.026 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 239], 20.00th=[ 249], 00:41:09.026 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:41:09.026 | 70.00th=[ 306], 80.00th=[ 330], 
90.00th=[ 412], 95.00th=[ 6652], 00:41:09.027 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:41:09.027 | 99.99th=[42730] 00:41:09.027 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:41:09.027 slat (nsec): min=11043, max=38751, avg=12220.43, stdev=1714.26 00:41:09.027 clat (usec): min=162, max=334, avg=187.28, stdev=15.95 00:41:09.027 lat (usec): min=173, max=373, avg=199.50, stdev=16.54 00:41:09.027 clat percentiles (usec): 00:41:09.027 | 1.00th=[ 165], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:41:09.027 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:41:09.027 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 217], 00:41:09.027 | 99.00th=[ 229], 99.50th=[ 243], 99.90th=[ 334], 99.95th=[ 334], 00:41:09.027 | 99.99th=[ 334] 00:41:09.027 bw ( KiB/s): min= 4096, max= 4096, per=17.33%, avg=4096.00, stdev= 0.00, samples=1 00:41:09.027 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:09.027 lat (usec) : 250=66.56%, 500=31.22% 00:41:09.027 lat (msec) : 10=0.11%, 50=2.12% 00:41:09.027 cpu : usr=0.80%, sys=1.50%, ctx=897, majf=0, minf=1 00:41:09.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:09.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.027 issued rwts: total=385,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:09.027 job3: (groupid=0, jobs=1): err= 0: pid=422092: Sat Dec 14 03:22:23 2024 00:41:09.027 read: IOPS=1972, BW=7888KiB/s (8078kB/s)(8204KiB/1040msec) 00:41:09.027 slat (nsec): min=4123, max=26278, avg=6173.86, stdev=1384.46 00:41:09.027 clat (usec): min=210, max=40819, avg=285.06, stdev=1264.81 00:41:09.027 lat (usec): min=215, max=40825, avg=291.24, stdev=1264.81 00:41:09.027 clat percentiles (usec): 00:41:09.027 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 239], 00:41:09.027 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:41:09.027 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 255], 95.00th=[ 258], 00:41:09.027 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 338], 99.95th=[40633], 00:41:09.027 | 99.99th=[40633] 00:41:09.027 write: IOPS=2461, BW=9846KiB/s (10.1MB/s)(10.0MiB/1040msec); 0 zone resets 00:41:09.027 slat (nsec): min=4720, max=38875, avg=7162.68, stdev=1974.82 00:41:09.027 clat (usec): min=117, max=365, avg=162.20, stdev=17.76 00:41:09.027 lat (usec): min=123, max=404, avg=169.37, stdev=19.02 00:41:09.027 clat percentiles (usec): 00:41:09.027 | 1.00th=[ 133], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:41:09.027 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:41:09.027 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 192], 00:41:09.027 | 99.00th=[ 241], 99.50th=[ 245], 99.90th=[ 297], 99.95th=[ 334], 00:41:09.027 | 99.99th=[ 367] 00:41:09.027 bw ( KiB/s): min= 9520, max=10960, per=43.33%, avg=10240.00, stdev=1018.23, samples=2 00:41:09.027 iops : min= 2380, max= 2740, avg=2560.00, stdev=254.56, samples=2 00:41:09.027 lat (usec) : 250=85.40%, 500=14.55% 00:41:09.027 lat (msec) : 50=0.04% 00:41:09.027 cpu : usr=0.77%, sys=3.66%, ctx=4611, majf=0, minf=1 00:41:09.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:09.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.027 issued rwts: total=2051,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:09.027 00:41:09.027 Run status group 0 (all jobs): 00:41:09.027 READ: bw=18.6MiB/s (19.5MB/s), 1303KiB/s-8787KiB/s (1334kB/s-8998kB/s), io=19.4MiB (20.3MB), run=1001-1040msec 00:41:09.027 WRITE: bw=23.1MiB/s (24.2MB/s), 2046KiB/s-9.99MiB/s (2095kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1040msec 00:41:09.027 00:41:09.027 Disk stats (read/write): 00:41:09.027 nvme0n1: ios=2014/2048, merge=0/0, ticks=661/322, in_queue=983, util=99.00% 00:41:09.027 nvme0n2: ios=33/512, merge=0/0, ticks=740/86, in_queue=826, util=86.69% 00:41:09.027 nvme0n3: ios=18/512, merge=0/0, ticks=740/88, in_queue=828, util=88.95% 00:41:09.027 nvme0n4: ios=2030/2048, merge=0/0, ticks=1149/328, in_queue=1477, util=99.16% 00:41:09.027 03:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:09.027 [global] 00:41:09.027 thread=1 00:41:09.027 invalidate=1 00:41:09.027 rw=randwrite 00:41:09.027 time_based=1 00:41:09.027 runtime=1 00:41:09.027 ioengine=libaio 00:41:09.027 direct=1 00:41:09.027 bs=4096 00:41:09.027 iodepth=1 00:41:09.027 norandommap=0 00:41:09.027 numjobs=1 00:41:09.027 00:41:09.027 verify_dump=1 00:41:09.027 verify_backlog=512 00:41:09.027 verify_state_save=0 00:41:09.027 do_verify=1 00:41:09.027 verify=crc32c-intel 00:41:09.027 [job0] 00:41:09.027 filename=/dev/nvme0n1 00:41:09.027 [job1] 00:41:09.027 filename=/dev/nvme0n2 00:41:09.027 [job2] 00:41:09.027 filename=/dev/nvme0n3 00:41:09.027 [job3] 00:41:09.027 filename=/dev/nvme0n4 00:41:09.027 Could not set queue depth (nvme0n1) 00:41:09.027 Could not set queue depth (nvme0n2) 00:41:09.027 Could not set queue depth (nvme0n3) 00:41:09.027 Could not set queue depth (nvme0n4) 00:41:09.286 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:09.286 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:09.286 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:09.286 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:09.286 fio-3.35 00:41:09.286 Starting 4 threads 00:41:10.654 00:41:10.654 job0: (groupid=0, jobs=1): err= 0: pid=422245: Sat Dec 14 03:22:25 2024 00:41:10.654 read: IOPS=942, BW=3768KiB/s (3859kB/s)(3772KiB/1001msec) 00:41:10.654 slat (nsec): min=7047, max=23952, avg=8447.31, stdev=2061.66 00:41:10.654 clat (usec): min=198, max=41950, avg=849.97, stdev=4933.38 00:41:10.654 lat (usec): min=207, max=41974, avg=858.42, stdev=4934.93 00:41:10.654 clat percentiles (usec): 00:41:10.654 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 237], 00:41:10.654 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:41:10.654 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:41:10.654 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:41:10.654 | 99.99th=[42206] 00:41:10.654 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:41:10.654 slat (nsec): min=10345, max=49020, avg=11629.73, stdev=1802.67 00:41:10.654 clat (usec): min=135, max=315, avg=168.28, stdev=21.63 00:41:10.654 lat (usec): min=146, 
max=364, avg=179.91, stdev=22.19 00:41:10.654 clat percentiles (usec): 00:41:10.654 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:41:10.654 | 30.00th=[ 155], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:41:10.654 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 194], 95.00th=[ 210], 00:41:10.654 | 99.00th=[ 233], 99.50th=[ 247], 99.90th=[ 306], 99.95th=[ 314], 00:41:10.654 | 99.99th=[ 314] 00:41:10.654 bw ( KiB/s): min= 4096, max= 4096, per=18.89%, avg=4096.00, stdev= 0.00, samples=1 00:41:10.654 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:10.654 lat (usec) : 250=86.53%, 500=12.76% 00:41:10.654 lat (msec) : 50=0.71% 00:41:10.654 cpu : usr=1.80%, sys=3.00%, ctx=1968, majf=0, minf=1 00:41:10.654 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:10.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.654 issued rwts: total=943,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:10.654 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:10.654 job1: (groupid=0, jobs=1): err= 0: pid=422246: Sat Dec 14 03:22:25 2024 00:41:10.654 read: IOPS=1569, BW=6277KiB/s (6428kB/s)(6484KiB/1033msec) 00:41:10.654 slat (nsec): min=2450, max=35025, avg=8087.80, stdev=1360.33 00:41:10.654 clat (usec): min=196, max=41086, avg=403.80, stdev=2673.53 00:41:10.654 lat (usec): min=204, max=41099, avg=411.89, stdev=2673.86 00:41:10.654 clat percentiles (usec): 00:41:10.654 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 219], 00:41:10.654 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:41:10.654 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 251], 00:41:10.654 | 99.00th=[ 281], 99.50th=[ 347], 99.90th=[41157], 99.95th=[41157], 00:41:10.654 | 99.99th=[41157] 00:41:10.654 write: IOPS=1982, BW=7930KiB/s (8121kB/s)(8192KiB/1033msec); 0 zone resets 00:41:10.654 slat (nsec): min=6344, max=37905, avg=11770.83, stdev=1838.69 00:41:10.654 clat (usec): min=131, max=254, avg=161.13, stdev=15.08 00:41:10.654 lat (usec): min=143, max=266, avg=172.90, stdev=15.26 00:41:10.654 clat percentiles (usec): 00:41:10.654 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:41:10.654 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:41:10.654 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 188], 00:41:10.654 | 99.00th=[ 219], 99.50th=[ 235], 99.90th=[ 247], 99.95th=[ 255], 00:41:10.654 | 99.99th=[ 255] 00:41:10.654 bw ( KiB/s): min= 4920, max=11464, per=37.78%, avg=8192.00, stdev=4627.31, samples=2 00:41:10.654 iops : min= 1230, max= 2866, avg=2048.00, stdev=1156.83, samples=2 00:41:10.654 lat (usec) : 250=97.60%, 500=2.18%, 750=0.03% 00:41:10.654 lat (msec) : 50=0.19% 00:41:10.654 cpu : usr=2.91%, sys=5.81%, ctx=3672, majf=0, minf=1 00:41:10.654 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:10.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.654 issued rwts: total=1621,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:10.654 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:10.654 job2: (groupid=0, jobs=1): err= 0: pid=422250: Sat Dec 14 03:22:25 2024 00:41:10.654 read: IOPS=1013, BW=4054KiB/s (4151kB/s)(4212KiB/1039msec) 00:41:10.654 slat (nsec): min=6532, max=25050, avg=7674.87, 
stdev=1792.16 00:41:10.654 clat (usec): min=210, max=41384, avg=705.71, stdev=4329.91 00:41:10.654 lat (usec): min=218, max=41392, avg=713.38, stdev=4330.45 00:41:10.654 clat percentiles (usec): 00:41:10.654 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 229], 00:41:10.654 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:41:10.654 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 265], 00:41:10.654 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:10.654 | 99.99th=[41157] 00:41:10.654 write: IOPS=1478, BW=5913KiB/s (6055kB/s)(6144KiB/1039msec); 0 zone resets 00:41:10.654 slat (nsec): min=9002, max=38800, avg=10250.81, stdev=1667.90 00:41:10.654 clat (usec): min=137, max=331, avg=173.39, stdev=19.37 00:41:10.654 lat (usec): min=147, max=350, avg=183.64, stdev=19.88 00:41:10.654 clat percentiles (usec): 00:41:10.654 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:41:10.654 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:41:10.654 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 210], 00:41:10.654 | 99.00th=[ 231], 99.50th=[ 239], 99.90th=[ 322], 99.95th=[ 330], 00:41:10.654 | 99.99th=[ 330] 00:41:10.654 bw ( KiB/s): min= 5224, max= 7064, per=28.34%, avg=6144.00, stdev=1301.08, samples=2 00:41:10.655 iops : min= 1306, max= 1766, avg=1536.00, stdev=325.27, samples=2 00:41:10.655 lat (usec) : 250=92.00%, 500=7.53% 00:41:10.655 lat (msec) : 50=0.46% 00:41:10.655 cpu : usr=1.45%, sys=2.22%, ctx=2589, majf=0, minf=2 00:41:10.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:10.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.655 issued rwts: total=1053,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:10.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:10.655 job3: (groupid=0, jobs=1): err= 0: pid=422254: Sat Dec 14 03:22:25 2024 00:41:10.655 read: IOPS=950, BW=3803KiB/s (3895kB/s)(3948KiB/1038msec) 00:41:10.655 slat (nsec): min=8254, max=24405, avg=9527.61, stdev=1927.34 00:41:10.655 clat (usec): min=211, max=41468, avg=835.21, stdev=4833.60 00:41:10.655 lat (usec): min=220, max=41477, avg=844.73, stdev=4833.69 00:41:10.655 clat percentiles (usec): 00:41:10.655 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 229], 00:41:10.655 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:41:10.655 | 70.00th=[ 249], 80.00th=[ 262], 90.00th=[ 330], 95.00th=[ 343], 00:41:10.655 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:41:10.655 | 99.99th=[41681] 00:41:10.655 write: IOPS=986, BW=3946KiB/s (4041kB/s)(4096KiB/1038msec); 0 zone resets 00:41:10.655 slat (nsec): min=11544, max=41869, avg=12764.96, stdev=2171.35 00:41:10.655 clat (usec): min=149, max=391, avg=179.78, stdev=17.53 00:41:10.655 lat (usec): min=161, max=403, avg=192.54, stdev=17.84 00:41:10.655 clat percentiles (usec): 00:41:10.655 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:41:10.655 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:41:10.655 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 208], 00:41:10.655 | 99.00th=[ 225], 99.50th=[ 237], 99.90th=[ 306], 99.95th=[ 392], 00:41:10.655 | 99.99th=[ 392] 00:41:10.655 bw ( KiB/s): min= 8192, max= 8192, per=37.78%, avg=8192.00, stdev= 0.00, samples=1 00:41:10.655 iops : min= 2048, max= 2048, avg=2048.00, stdev= 
0.00, samples=1 00:41:10.655 lat (usec) : 250=85.88%, 500=13.28%, 750=0.15% 00:41:10.655 lat (msec) : 50=0.70% 00:41:10.655 cpu : usr=2.31%, sys=2.89%, ctx=2011, majf=0, minf=1 00:41:10.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:10.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.655 issued rwts: total=987,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:10.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:10.655 00:41:10.655 Run status group 0 (all jobs): 00:41:10.655 READ: bw=17.3MiB/s (18.1MB/s), 3768KiB/s-6277KiB/s (3859kB/s-6428kB/s), io=18.0MiB (18.9MB), run=1001-1039msec 00:41:10.655 WRITE: bw=21.2MiB/s (22.2MB/s), 3946KiB/s-7930KiB/s (4041kB/s-8121kB/s), io=22.0MiB (23.1MB), run=1001-1039msec 00:41:10.655 00:41:10.655 Disk stats (read/write): 00:41:10.655 nvme0n1: ios=561/675, merge=0/0, ticks=1485/113, in_queue=1598, util=83.67% 00:41:10.655 nvme0n2: ios=1669/2048, merge=0/0, ticks=580/312, in_queue=892, util=88.75% 00:41:10.655 nvme0n3: ios=1104/1536, merge=0/0, ticks=581/258, in_queue=839, util=92.90% 00:41:10.655 nvme0n4: ios=1031/1024, merge=0/0, ticks=667/169, in_queue=836, util=95.07% 00:41:10.655 03:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:10.655 [global] 00:41:10.655 thread=1 00:41:10.655 invalidate=1 00:41:10.655 rw=write 00:41:10.655 time_based=1 00:41:10.655 runtime=1 00:41:10.655 ioengine=libaio 00:41:10.655 direct=1 00:41:10.655 bs=4096 00:41:10.655 iodepth=128 00:41:10.655 norandommap=0 00:41:10.655 numjobs=1 00:41:10.655 00:41:10.655 verify_dump=1 00:41:10.655 verify_backlog=512 00:41:10.655 verify_state_save=0 00:41:10.655 do_verify=1 00:41:10.655 verify=crc32c-intel 00:41:10.655 [job0] 00:41:10.655 filename=/dev/nvme0n1 00:41:10.655 [job1] 00:41:10.655 filename=/dev/nvme0n2 00:41:10.655 [job2] 00:41:10.655 filename=/dev/nvme0n3 00:41:10.655 [job3] 00:41:10.655 filename=/dev/nvme0n4 00:41:10.655 Could not set queue depth (nvme0n1) 00:41:10.655 Could not set queue depth (nvme0n2) 00:41:10.655 Could not set queue depth (nvme0n3) 00:41:10.655 Could not set queue depth (nvme0n4) 00:41:10.912 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:10.912 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:10.912 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:10.912 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:10.912 fio-3.35 00:41:10.912 Starting 4 threads 00:41:12.282 00:41:12.282 job0: (groupid=0, jobs=1): err= 0: pid=422408: Sat Dec 14 03:22:27 2024 00:41:12.282 read: IOPS=3409, BW=13.3MiB/s (14.0MB/s)(13.5MiB/1010msec) 00:41:12.283 slat (nsec): min=1100, max=29582k, avg=96769.28, stdev=853497.98 00:41:12.283 clat (usec): min=1072, max=55580, avg=13156.09, stdev=8626.29 00:41:12.283 lat (usec): min=1078, max=55588, avg=13252.86, stdev=8698.43 00:41:12.283 clat percentiles (usec): 00:41:12.283 | 1.00th=[ 1467], 5.00th=[ 5080], 10.00th=[ 7439], 20.00th=[ 8717], 00:41:12.283 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[10945], 00:41:12.283 | 70.00th=[13173], 80.00th=[17433], 
90.00th=[23462], 95.00th=[32637], 00:41:12.283 | 99.00th=[49546], 99.50th=[52691], 99.90th=[55313], 99.95th=[55837], 00:41:12.283 | 99.99th=[55837] 00:41:12.283 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:41:12.283 slat (usec): min=2, max=11656, avg=164.82, stdev=882.92 00:41:12.283 clat (usec): min=415, max=100032, avg=23098.24, stdev=21357.69 00:41:12.283 lat (usec): min=430, max=100041, avg=23263.06, stdev=21499.75 00:41:12.283 clat percentiles (usec): 00:41:12.283 | 1.00th=[ 1139], 5.00th=[ 4359], 10.00th=[ 6325], 20.00th=[ 7898], 00:41:12.283 | 30.00th=[ 8291], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[ 16581], 00:41:12.283 | 70.00th=[ 31851], 80.00th=[ 42206], 90.00th=[ 52167], 95.00th=[ 65274], 00:41:12.283 | 99.00th=[ 92799], 99.50th=[ 98042], 99.90th=[100140], 99.95th=[100140], 00:41:12.283 | 99.99th=[100140] 00:41:12.283 bw ( KiB/s): min=12176, max=16496, per=21.23%, avg=14336.00, stdev=3054.70, samples=2 00:41:12.283 iops : min= 3044, max= 4124, avg=3584.00, stdev=763.68, samples=2 00:41:12.283 lat (usec) : 500=0.03%, 750=0.11%, 1000=0.09% 00:41:12.283 lat (msec) : 2=1.98%, 4=1.32%, 10=46.36%, 20=24.42%, 50=18.55% 00:41:12.283 lat (msec) : 100=7.06%, 250=0.09% 00:41:12.283 cpu : usr=2.58%, sys=3.67%, ctx=323, majf=0, minf=1 00:41:12.283 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:41:12.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:12.283 issued rwts: total=3444,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:12.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:12.283 job1: (groupid=0, jobs=1): err= 0: pid=422409: Sat Dec 14 03:22:27 2024 00:41:12.283 read: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1007msec) 00:41:12.283 slat (nsec): min=1274, max=11541k, avg=67159.45, stdev=579646.13 00:41:12.283 clat (usec): min=1756, max=26373, avg=8777.34, stdev=3164.69 00:41:12.283 lat (usec): min=1763, max=28426, avg=8844.50, stdev=3208.64 00:41:12.283 clat percentiles (usec): 00:41:12.283 | 1.00th=[ 3556], 5.00th=[ 6063], 10.00th=[ 6587], 20.00th=[ 7177], 00:41:12.283 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7898], 00:41:12.283 | 70.00th=[ 8717], 80.00th=[10421], 90.00th=[13042], 95.00th=[14222], 00:41:12.283 | 99.00th=[23462], 99.50th=[25560], 99.90th=[26084], 99.95th=[26084], 00:41:12.283 | 99.99th=[26346] 00:41:12.283 write: IOPS=7441, BW=29.1MiB/s (30.5MB/s)(29.3MiB/1007msec); 0 zone resets 00:41:12.283 slat (usec): min=2, max=25917, avg=62.41, stdev=571.42 00:41:12.283 clat (usec): min=1990, max=42947, avg=8634.92, stdev=4690.11 00:41:12.283 lat (usec): min=1999, max=42980, avg=8697.33, stdev=4729.35 00:41:12.283 clat percentiles (usec): 00:41:12.283 | 1.00th=[ 3163], 5.00th=[ 4621], 10.00th=[ 5407], 20.00th=[ 6259], 00:41:12.283 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7635], 60.00th=[ 7898], 00:41:12.283 | 70.00th=[ 8029], 80.00th=[ 9765], 90.00th=[12125], 95.00th=[17957], 00:41:12.283 | 99.00th=[30802], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:41:12.283 | 99.99th=[42730] 00:41:12.283 bw ( KiB/s): min=27200, max=31728, per=43.62%, avg=29464.00, stdev=3201.78, samples=2 00:41:12.283 iops : min= 6800, max= 7932, avg=7366.00, stdev=800.44, samples=2 00:41:12.283 lat (msec) : 2=0.12%, 4=1.54%, 10=80.30%, 20=15.28%, 50=2.76% 00:41:12.283 cpu : usr=4.37%, sys=8.85%, ctx=516, majf=0, minf=1 00:41:12.283 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.2%, >=64=99.6% 00:41:12.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:12.283 issued rwts: total=7168,7494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:12.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:12.283 job2: (groupid=0, jobs=1): err= 0: pid=422411: Sat Dec 14 03:22:27 2024 00:41:12.283 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:41:12.283 slat (nsec): min=1807, max=29410k, avg=139507.78, stdev=1112249.55 00:41:12.283 clat (usec): min=7779, max=70052, avg=17438.36, stdev=11057.19 00:41:12.283 lat (usec): min=7787, max=70071, avg=17577.87, stdev=11168.30 00:41:12.283 clat percentiles (usec): 00:41:12.283 | 1.00th=[ 9765], 5.00th=[10683], 10.00th=[11076], 20.00th=[12125], 00:41:12.283 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:41:12.283 | 70.00th=[14222], 80.00th=[17957], 90.00th=[29492], 95.00th=[45351], 00:41:12.283 | 99.00th=[58983], 99.50th=[68682], 99.90th=[69731], 99.95th=[69731], 00:41:12.283 | 99.99th=[69731] 00:41:12.283 write: IOPS=2855, BW=11.2MiB/s (11.7MB/s)(11.3MiB/1014msec); 0 zone resets 00:41:12.283 slat (usec): min=3, max=14436, avg=213.70, stdev=1072.51 00:41:12.283 clat (usec): min=1132, max=106142, avg=28993.68, stdev=24566.76 00:41:12.283 lat (usec): min=1144, max=106168, avg=29207.38, stdev=24714.38 00:41:12.283 clat percentiles (msec): 00:41:12.283 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:41:12.283 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 19], 60.00th=[ 25], 00:41:12.283 | 70.00th=[ 31], 80.00th=[ 41], 90.00th=[ 74], 95.00th=[ 85], 00:41:12.283 | 99.00th=[ 101], 99.50th=[ 106], 99.90th=[ 107], 99.95th=[ 107], 00:41:12.283 | 99.99th=[ 107] 00:41:12.283 bw ( KiB/s): min= 8192, max=13944, per=16.39%, avg=11068.00, stdev=4067.28, samples=2 00:41:12.283 iops : min= 2048, max= 3486, avg=2767.00, stdev=1016.82, samples=2 00:41:12.283 lat (msec) : 2=0.09%, 10=3.67%, 20=61.85%, 50=23.19%, 100=10.67% 00:41:12.283 lat (msec) : 250=0.53% 00:41:12.283 cpu : usr=2.17%, sys=4.64%, ctx=260, majf=0, minf=2 00:41:12.283 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:41:12.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:12.283 issued rwts: total=2560,2895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:12.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:12.283 job3: (groupid=0, jobs=1): err= 0: pid=422412: Sat Dec 14 03:22:27 2024 00:41:12.283 read: IOPS=3023, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1016msec) 00:41:12.283 slat (nsec): min=1720, max=17785k, avg=141364.93, stdev=1030284.41 00:41:12.283 clat (usec): min=6560, max=60573, avg=16999.00, stdev=8770.31 00:41:12.283 lat (usec): min=6571, max=60584, avg=17140.36, stdev=8844.60 00:41:12.283 clat percentiles (usec): 00:41:12.283 | 1.00th=[ 9896], 5.00th=[10814], 10.00th=[11076], 20.00th=[11338], 00:41:12.283 | 30.00th=[11600], 40.00th=[12911], 50.00th=[14615], 60.00th=[15139], 00:41:12.283 | 70.00th=[17695], 80.00th=[18482], 90.00th=[27657], 95.00th=[35914], 00:41:12.283 | 99.00th=[54264], 99.50th=[57410], 99.90th=[60556], 99.95th=[60556], 00:41:12.283 | 99.99th=[60556] 00:41:12.283 write: IOPS=3131, BW=12.2MiB/s (12.8MB/s)(12.4MiB/1016msec); 0 zone resets 00:41:12.283 slat (usec): min=2, max=18668, avg=173.16, stdev=923.82 00:41:12.283 clat (usec): 
min=3504, max=60536, avg=24076.49, stdev=13604.82 00:41:12.283 lat (usec): min=3513, max=60539, avg=24249.65, stdev=13704.17 00:41:12.283 clat percentiles (usec): 00:41:12.283 | 1.00th=[ 7242], 5.00th=[ 8094], 10.00th=[ 9372], 20.00th=[10814], 00:41:12.283 | 30.00th=[14353], 40.00th=[16450], 50.00th=[21103], 60.00th=[25822], 00:41:12.283 | 70.00th=[31065], 80.00th=[37487], 90.00th=[45351], 95.00th=[50594], 00:41:12.283 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55313], 99.95th=[60556], 00:41:12.283 | 99.99th=[60556] 00:41:12.283 bw ( KiB/s): min=12288, max=12288, per=18.19%, avg=12288.00, stdev= 0.00, samples=2 00:41:12.283 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:41:12.283 lat (msec) : 4=0.10%, 10=9.08%, 20=55.84%, 50=31.40%, 100=3.58% 00:41:12.283 cpu : usr=3.15%, sys=4.04%, ctx=275, majf=0, minf=1 00:41:12.283 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:41:12.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:12.283 issued rwts: total=3072,3182,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:12.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:12.283 00:41:12.283 Run status group 0 (all jobs): 00:41:12.283 READ: bw=62.5MiB/s (65.5MB/s), 9.86MiB/s-27.8MiB/s (10.3MB/s-29.2MB/s), io=63.5MiB (66.5MB), run=1007-1016msec 00:41:12.283 WRITE: bw=66.0MiB/s (69.2MB/s), 11.2MiB/s-29.1MiB/s (11.7MB/s-30.5MB/s), io=67.0MiB (70.3MB), run=1007-1016msec 00:41:12.283 00:41:12.283 Disk stats (read/write): 00:41:12.284 nvme0n1: ios=3091/3135, merge=0/0, ticks=36024/66646, in_queue=102670, util=100.00% 00:41:12.284 nvme0n2: ios=5651/6086, merge=0/0, ticks=49233/52842, in_queue=102075, util=98.06% 00:41:12.284 nvme0n3: ios=2092/2479, merge=0/0, ticks=22334/36630, in_queue=58964, util=97.77% 00:41:12.284 nvme0n4: ios=2577/2655, merge=0/0, ticks=43072/58904, in_queue=101976, util=97.75% 00:41:12.284 03:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:12.284 [global] 00:41:12.284 thread=1 00:41:12.284 invalidate=1 00:41:12.284 rw=randwrite 00:41:12.284 time_based=1 00:41:12.284 runtime=1 00:41:12.284 ioengine=libaio 00:41:12.284 direct=1 00:41:12.284 bs=4096 00:41:12.284 iodepth=128 00:41:12.284 norandommap=0 00:41:12.284 numjobs=1 00:41:12.284 00:41:12.284 verify_dump=1 00:41:12.284 verify_backlog=512 00:41:12.284 verify_state_save=0 00:41:12.284 do_verify=1 00:41:12.284 verify=crc32c-intel 00:41:12.284 [job0] 00:41:12.284 filename=/dev/nvme0n1 00:41:12.284 [job1] 00:41:12.284 filename=/dev/nvme0n2 00:41:12.284 [job2] 00:41:12.284 filename=/dev/nvme0n3 00:41:12.284 [job3] 00:41:12.284 filename=/dev/nvme0n4 00:41:12.284 Could not set queue depth (nvme0n1) 00:41:12.284 Could not set queue depth (nvme0n2) 00:41:12.284 Could not set queue depth (nvme0n3) 00:41:12.284 Could not set queue depth (nvme0n4) 00:41:12.284 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:12.284 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:12.284 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:12.284 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:41:12.284 fio-3.35 00:41:12.284 Starting 4 threads 00:41:13.654 00:41:13.654 job0: (groupid=0, jobs=1): err= 0: pid=422568: Sat Dec 14 03:22:28 2024 00:41:13.654 read: IOPS=6359, BW=24.8MiB/s (26.0MB/s)(25.0MiB/1005msec) 00:41:13.654 slat (nsec): min=1275, max=8977.8k, avg=78298.53, stdev=624587.95 00:41:13.654 clat (usec): min=4254, max=18836, avg=10375.70, stdev=2474.69 00:41:13.654 lat (usec): min=4256, max=18848, avg=10454.00, stdev=2519.85 00:41:13.654 clat percentiles (usec): 00:41:13.654 | 1.00th=[ 6259], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 8848], 00:41:13.654 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:41:13.654 | 70.00th=[10290], 80.00th=[12125], 90.00th=[14877], 95.00th=[15664], 00:41:13.654 | 99.00th=[17433], 99.50th=[17957], 99.90th=[18482], 99.95th=[18482], 00:41:13.654 | 99.99th=[18744] 00:41:13.654 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:41:13.654 slat (nsec): min=1963, max=8079.2k, avg=68534.26, stdev=499724.77 00:41:13.654 clat (usec): min=1495, max=18516, avg=9186.45, stdev=2246.99 00:41:13.654 lat (usec): min=1508, max=18520, avg=9254.99, stdev=2264.08 00:41:13.654 clat percentiles (usec): 00:41:13.654 | 1.00th=[ 4752], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6390], 00:41:13.654 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[ 9896], 00:41:13.654 | 70.00th=[10028], 80.00th=[10290], 90.00th=[12911], 95.00th=[13042], 00:41:13.654 | 99.00th=[13566], 99.50th=[13698], 99.90th=[17695], 99.95th=[18220], 00:41:13.654 | 99.99th=[18482] 00:41:13.654 bw ( KiB/s): min=25880, max=27368, per=35.40%, avg=26624.00, stdev=1052.17, samples=2 00:41:13.654 iops : min= 6470, max= 6842, avg=6656.00, stdev=263.04, samples=2 00:41:13.654 lat (msec) : 2=0.02%, 4=0.19%, 10=65.31%, 20=34.48% 00:41:13.654 cpu : usr=5.78%, sys=7.77%, ctx=444, majf=0, minf=1 00:41:13.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:41:13.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:13.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:13.654 issued rwts: total=6391,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:13.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:13.654 job1: (groupid=0, jobs=1): err= 0: pid=422569: Sat Dec 14 03:22:28 2024 00:41:13.654 read: IOPS=4505, BW=17.6MiB/s (18.5MB/s)(17.6MiB/1002msec) 00:41:13.654 slat (nsec): min=1402, max=13370k, avg=95828.61, stdev=580640.41 00:41:13.654 clat (usec): min=806, max=42441, avg=11719.20, stdev=5853.83 00:41:13.654 lat (usec): min=3510, max=49618, avg=11815.03, stdev=5911.86 00:41:13.654 clat percentiles (usec): 00:41:13.654 | 1.00th=[ 6587], 5.00th=[ 8094], 10.00th=[ 8586], 20.00th=[ 9241], 00:41:13.654 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:41:13.654 | 70.00th=[10290], 80.00th=[11076], 90.00th=[19530], 95.00th=[26346], 00:41:13.654 | 99.00th=[35914], 99.50th=[37487], 99.90th=[42206], 99.95th=[42206], 00:41:13.654 | 99.99th=[42206] 00:41:13.654 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:41:13.654 slat (nsec): min=1943, max=10674k, avg=117584.00, stdev=560274.91 00:41:13.654 clat (usec): min=4146, max=65017, avg=15979.97, stdev=13564.83 00:41:13.654 lat (usec): min=4150, max=65026, avg=16097.55, stdev=13655.08 00:41:13.654 clat percentiles (usec): 00:41:13.654 | 1.00th=[ 6194], 5.00th=[ 6980], 10.00th=[ 8848], 20.00th=[ 9503], 00:41:13.654 | 30.00th=[ 
9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:41:13.654 | 70.00th=[10290], 80.00th=[19268], 90.00th=[44827], 95.00th=[50070], 00:41:13.654 | 99.00th=[55313], 99.50th=[56361], 99.90th=[64750], 99.95th=[64750], 00:41:13.654 | 99.99th=[65274] 00:41:13.654 bw ( KiB/s): min=12272, max=24592, per=24.51%, avg=18432.00, stdev=8711.56, samples=2 00:41:13.654 iops : min= 3068, max= 6148, avg=4608.00, stdev=2177.89, samples=2 00:41:13.654 lat (usec) : 1000=0.01% 00:41:13.654 lat (msec) : 4=0.35%, 10=52.54%, 20=32.34%, 50=12.30%, 100=2.47% 00:41:13.654 cpu : usr=2.80%, sys=5.00%, ctx=644, majf=0, minf=1 00:41:13.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:41:13.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:13.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:13.654 issued rwts: total=4515,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:13.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:13.654 job2: (groupid=0, jobs=1): err= 0: pid=422570: Sat Dec 14 03:22:28 2024 00:41:13.654 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:41:13.654 slat (nsec): min=1183, max=12502k, avg=115280.40, stdev=749382.10 00:41:13.654 clat (usec): min=4855, max=40067, avg=15423.12, stdev=6646.57 00:41:13.654 lat (usec): min=4865, max=40152, avg=15538.40, stdev=6709.85 00:41:13.654 clat percentiles (usec): 00:41:13.654 | 1.00th=[ 7046], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10290], 00:41:13.654 | 30.00th=[11076], 40.00th=[11731], 50.00th=[13566], 60.00th=[15008], 00:41:13.654 | 70.00th=[16581], 80.00th=[19268], 90.00th=[26346], 95.00th=[31589], 00:41:13.654 | 99.00th=[34866], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:41:13.654 | 99.99th=[40109] 00:41:13.654 write: IOPS=3605, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1006msec); 0 zone resets 00:41:13.654 slat (usec): min=2, max=17959, avg=151.31, stdev=798.01 00:41:13.654 clat (usec): min=3061, max=78349, avg=19792.83, stdev=17680.78 00:41:13.654 lat (usec): min=3511, max=78401, avg=19944.13, stdev=17807.04 00:41:13.654 clat percentiles (usec): 00:41:13.654 | 1.00th=[ 3785], 5.00th=[ 6521], 10.00th=[ 8029], 20.00th=[ 9372], 00:41:13.655 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600], 00:41:13.655 | 70.00th=[15926], 80.00th=[27395], 90.00th=[53216], 95.00th=[56886], 00:41:13.655 | 99.00th=[74974], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:41:13.655 | 99.99th=[78119] 00:41:13.655 bw ( KiB/s): min= 7856, max=20816, per=19.06%, avg=14336.00, stdev=9164.10, samples=2 00:41:13.655 iops : min= 1964, max= 5204, avg=3584.00, stdev=2291.03, samples=2 00:41:13.655 lat (msec) : 4=0.57%, 10=19.96%, 20=55.98%, 50=17.89%, 100=5.60% 00:41:13.655 cpu : usr=2.79%, sys=6.47%, ctx=333, majf=0, minf=1 00:41:13.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:41:13.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:13.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:13.655 issued rwts: total=3584,3627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:13.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:13.655 job3: (groupid=0, jobs=1): err= 0: pid=422571: Sat Dec 14 03:22:28 2024 00:41:13.655 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:41:13.655 slat (nsec): min=1859, max=10185k, avg=120559.83, stdev=746794.86 00:41:13.655 clat (usec): min=9422, max=29991, avg=15555.33, stdev=3221.08 
00:41:13.655 lat (usec): min=9430, max=30130, avg=15675.89, stdev=3274.46 00:41:13.655 clat percentiles (usec): 00:41:13.655 | 1.00th=[10552], 5.00th=[11731], 10.00th=[12518], 20.00th=[13304], 00:41:13.655 | 30.00th=[13698], 40.00th=[13829], 50.00th=[14353], 60.00th=[15139], 00:41:13.655 | 70.00th=[16319], 80.00th=[17695], 90.00th=[20317], 95.00th=[21365], 00:41:13.655 | 99.00th=[24773], 99.50th=[27919], 99.90th=[29754], 99.95th=[29754], 00:41:13.655 | 99.99th=[30016] 00:41:13.655 write: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1006msec); 0 zone resets 00:41:13.655 slat (usec): min=2, max=9817, avg=134.16, stdev=755.49 00:41:13.655 clat (usec): min=1076, max=42364, avg=17769.69, stdev=7332.83 00:41:13.655 lat (usec): min=6817, max=42379, avg=17903.85, stdev=7401.04 00:41:13.655 clat percentiles (usec): 00:41:13.655 | 1.00th=[ 7373], 5.00th=[11994], 10.00th=[12780], 20.00th=[13304], 00:41:13.655 | 30.00th=[13698], 40.00th=[13829], 50.00th=[14353], 60.00th=[16188], 00:41:13.655 | 70.00th=[17695], 80.00th=[21365], 90.00th=[31327], 95.00th=[36963], 00:41:13.655 | 99.00th=[39060], 99.50th=[40109], 99.90th=[42206], 99.95th=[42206], 00:41:13.655 | 99.99th=[42206] 00:41:13.655 bw ( KiB/s): min=14792, max=16384, per=20.73%, avg=15588.00, stdev=1125.71, samples=2 00:41:13.655 iops : min= 3698, max= 4096, avg=3897.00, stdev=281.43, samples=2 00:41:13.655 lat (msec) : 2=0.01%, 10=2.16%, 20=78.97%, 50=18.86% 00:41:13.655 cpu : usr=3.68%, sys=6.37%, ctx=281, majf=0, minf=1 00:41:13.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:13.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:13.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:13.655 issued rwts: total=3584,4025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:13.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:13.655 00:41:13.655 Run status group 0 (all jobs): 00:41:13.655 READ: bw=70.2MiB/s (73.6MB/s), 13.9MiB/s-24.8MiB/s (14.6MB/s-26.0MB/s), io=70.6MiB (74.0MB), run=1002-1006msec 00:41:13.655 WRITE: bw=73.4MiB/s (77.0MB/s), 14.1MiB/s-25.9MiB/s (14.8MB/s-27.1MB/s), io=73.9MiB (77.5MB), run=1002-1006msec 00:41:13.655 00:41:13.655 Disk stats (read/write): 00:41:13.655 nvme0n1: ios=5427/5632, merge=0/0, ticks=54066/49832, in_queue=103898, util=86.97% 00:41:13.655 nvme0n2: ios=3634/3663, merge=0/0, ticks=16776/27771, in_queue=44547, util=91.27% 00:41:13.655 nvme0n3: ios=3113/3509, merge=0/0, ticks=26346/34052, in_queue=60398, util=96.98% 00:41:13.655 nvme0n4: ios=3094/3359, merge=0/0, ticks=23454/29644, in_queue=53098, util=98.32% 00:41:13.655 03:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:41:13.655 03:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=422586 00:41:13.655 03:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:13.655 03:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:41:13.655 [global] 00:41:13.655 thread=1 00:41:13.655 invalidate=1 00:41:13.655 rw=read 00:41:13.655 time_based=1 00:41:13.655 runtime=10 00:41:13.655 ioengine=libaio 00:41:13.655 direct=1 00:41:13.655 bs=4096 00:41:13.655 iodepth=1 00:41:13.655 norandommap=1 00:41:13.655 numjobs=1 00:41:13.655 00:41:13.655 [job0] 00:41:13.655 filename=/dev/nvme0n1 00:41:13.655 [job1] 
00:41:13.655 filename=/dev/nvme0n2 00:41:13.655 [job2] 00:41:13.655 filename=/dev/nvme0n3 00:41:13.655 [job3] 00:41:13.655 filename=/dev/nvme0n4 00:41:13.655 Could not set queue depth (nvme0n1) 00:41:13.655 Could not set queue depth (nvme0n2) 00:41:13.655 Could not set queue depth (nvme0n3) 00:41:13.655 Could not set queue depth (nvme0n4) 00:41:13.912 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:13.912 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:13.912 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:13.912 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:13.912 fio-3.35 00:41:13.912 Starting 4 threads 00:41:17.185 03:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:41:17.185 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=44195840, buflen=4096 00:41:17.185 fio: pid=422726, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:17.185 03:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:41:17.185 03:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:17.185 03:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:41:17.185 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4784128, buflen=4096 00:41:17.185 fio: pid=422724, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:17.185 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1060864, buflen=4096 00:41:17.185 fio: pid=422722, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:17.185 03:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:17.185 03:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:41:17.443 03:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:17.443 03:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:41:17.443 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=356352, buflen=4096 00:41:17.443 fio: pid=422723, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:17.443 00:41:17.443 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=422722: Sat Dec 14 03:22:32 2024 00:41:17.443 read: IOPS=83, BW=334KiB/s (342kB/s)(1036KiB/3105msec) 00:41:17.443 slat (usec): min=6, max=12749, avg=109.47, stdev=1112.06 00:41:17.443 clat (usec): min=190, 
max=43783, avg=11790.72, stdev=18355.13 00:41:17.443 lat (usec): min=197, max=56533, avg=11851.65, stdev=18456.54 00:41:17.443 clat percentiles (usec): 00:41:17.443 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 225], 00:41:17.443 | 30.00th=[ 235], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 367], 00:41:17.443 | 70.00th=[ 506], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:41:17.443 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:41:17.443 | 99.99th=[43779] 00:41:17.443 bw ( KiB/s): min= 232, max= 424, per=2.24%, avg=332.60, stdev=72.81, samples=5 00:41:17.443 iops : min= 58, max= 106, avg=83.00, stdev=18.22, samples=5 00:41:17.443 lat (usec) : 250=45.38%, 500=23.08%, 750=2.69% 00:41:17.443 lat (msec) : 10=0.38%, 50=28.08% 00:41:17.443 cpu : usr=0.00%, sys=0.16%, ctx=262, majf=0, minf=1 00:41:17.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:17.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.443 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.443 issued rwts: total=260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:17.443 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=422723: Sat Dec 14 03:22:32 2024 00:41:17.443 read: IOPS=26, BW=105KiB/s (107kB/s)(348KiB/3322msec) 00:41:17.443 slat (usec): min=8, max=13823, avg=178.16, stdev=1471.31 00:41:17.443 clat (usec): min=239, max=41995, avg=37748.05, stdev=11131.04 00:41:17.443 lat (usec): min=253, max=55107, avg=37927.98, stdev=11279.46 00:41:17.443 clat percentiles (usec): 00:41:17.443 | 1.00th=[ 239], 5.00th=[ 363], 10.00th=[40633], 20.00th=[41157], 00:41:17.443 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:17.443 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:17.443 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:17.443 | 99.99th=[42206] 00:41:17.443 bw ( KiB/s): min= 93, max= 128, per=0.72%, avg=106.00, stdev=12.66, samples=6 00:41:17.443 iops : min= 23, max= 32, avg=26.33, stdev= 3.27, samples=6 00:41:17.443 lat (usec) : 250=3.41%, 500=3.41%, 750=1.14% 00:41:17.443 lat (msec) : 50=90.91% 00:41:17.443 cpu : usr=0.00%, sys=0.09%, ctx=90, majf=0, minf=2 00:41:17.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:17.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.443 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.443 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:17.443 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=422724: Sat Dec 14 03:22:32 2024 00:41:17.443 read: IOPS=396, BW=1585KiB/s (1623kB/s)(4672KiB/2947msec) 00:41:17.443 slat (nsec): min=6972, max=41149, avg=9098.53, stdev=3594.47 00:41:17.443 clat (usec): min=223, max=41092, avg=2493.99, stdev=9266.36 00:41:17.443 lat (usec): min=231, max=41116, avg=2503.08, stdev=9269.58 00:41:17.443 clat percentiles (usec): 00:41:17.443 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:41:17.443 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:41:17.443 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[41157], 00:41:17.443 | 99.00th=[41157], 
99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:17.443 | 99.99th=[41157] 00:41:17.443 bw ( KiB/s): min= 96, max= 8838, per=12.46%, avg=1846.00, stdev=3908.65, samples=5 00:41:17.443 iops : min= 24, max= 2209, avg=461.40, stdev=976.94, samples=5 00:41:17.443 lat (usec) : 250=17.96%, 500=76.39%, 750=0.09% 00:41:17.443 lat (msec) : 50=5.47% 00:41:17.443 cpu : usr=0.20%, sys=0.68%, ctx=1169, majf=0, minf=2 00:41:17.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:17.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.443 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.443 issued rwts: total=1169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:17.443 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=422726: Sat Dec 14 03:22:32 2024 00:41:17.443 read: IOPS=3978, BW=15.5MiB/s (16.3MB/s)(42.1MiB/2712msec) 00:41:17.443 slat (nsec): min=5485, max=43535, avg=8196.13, stdev=1495.82 00:41:17.443 clat (usec): min=176, max=1404, avg=239.07, stdev=37.09 00:41:17.443 lat (usec): min=184, max=1419, avg=247.27, stdev=37.30 00:41:17.443 clat percentiles (usec): 00:41:17.443 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 202], 00:41:17.443 | 30.00th=[ 206], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:41:17.443 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 289], 00:41:17.443 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 506], 99.95th=[ 519], 00:41:17.443 | 99.99th=[ 1074] 00:41:17.443 bw ( KiB/s): min=13724, max=18888, per=100.00%, avg=15836.00, stdev=2012.37, samples=5 00:41:17.443 iops : min= 3431, max= 4722, avg=3959.00, stdev=503.09, samples=5 00:41:17.443 lat (usec) : 250=58.78%, 500=41.07%, 750=0.12% 00:41:17.443 lat (msec) : 2=0.02% 00:41:17.443 cpu : usr=3.06%, sys=5.68%, ctx=10791, majf=0, minf=2 00:41:17.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:17.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.443 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.443 issued rwts: total=10791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:17.443 00:41:17.443 Run status group 0 (all jobs): 00:41:17.443 READ: bw=14.5MiB/s (15.2MB/s), 105KiB/s-15.5MiB/s (107kB/s-16.3MB/s), io=48.1MiB (50.4MB), run=2712-3322msec 00:41:17.443 00:41:17.443 Disk stats (read/write): 00:41:17.443 nvme0n1: ios=245/0, merge=0/0, ticks=2838/0, in_queue=2838, util=95.39% 00:41:17.443 nvme0n2: ios=82/0, merge=0/0, ticks=3080/0, in_queue=3080, util=96.10% 00:41:17.443 nvme0n3: ios=1166/0, merge=0/0, ticks=2827/0, in_queue=2827, util=96.52% 00:41:17.443 nvme0n4: ios=10384/0, merge=0/0, ticks=2373/0, in_queue=2373, util=96.45% 00:41:17.700 03:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:17.700 03:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:41:17.957 03:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:17.958 03:22:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:41:17.958 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:17.958 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:41:18.215 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:18.215 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:41:18.472 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:41:18.472 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 422586 00:41:18.472 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:41:18.472 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:18.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:18.472 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:18.472 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:41:18.472 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:18.472 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:18.472 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:18.472 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:18.472 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:41:18.472 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:41:18.472 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:41:18.472 nvmf hotplug test: fio failed as expected 00:41:18.472 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:18.729 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:41:18.729 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:41:18.729 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:41:18.729 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:41:18.729 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:41:18.729 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:18.729 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:41:18.729 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:18.729 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:41:18.729 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:18.729 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:18.729 rmmod nvme_tcp 00:41:18.729 rmmod nvme_fabrics 00:41:18.729 rmmod nvme_keyring 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 421811 ']' 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 421811 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 421811 ']' 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 421811 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 421811 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 421811' 00:41:18.988 killing process with pid 421811 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 421811 00:41:18.988 03:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 421811 00:41:18.988 03:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:18.988 03:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:18.988 03:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:18.988 03:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:41:18.989 03:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:41:18.989 03:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:18.989 03:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:41:18.989 03:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:18.989 03:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:18.989 03:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:18.989 03:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:18.989 03:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:21.524 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:21.524 00:41:21.524 real 0m25.912s 00:41:21.524 user 1m32.653s 00:41:21.524 sys 0m10.820s 00:41:21.524 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:21.524 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:21.524 ************************************ 00:41:21.524 END TEST nvmf_fio_target 00:41:21.524 ************************************ 00:41:21.524 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:21.524 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:21.524 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:21.524 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:21.524 ************************************ 00:41:21.524 START TEST nvmf_bdevio 00:41:21.524 ************************************ 00:41:21.524 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:21.524 * Looking for test storage... 
00:41:21.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:21.524 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:21.524 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:21.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.525 --rc genhtml_branch_coverage=1 00:41:21.525 --rc genhtml_function_coverage=1 00:41:21.525 --rc genhtml_legend=1 00:41:21.525 --rc geninfo_all_blocks=1 00:41:21.525 --rc geninfo_unexecuted_blocks=1 00:41:21.525 00:41:21.525 ' 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:21.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.525 --rc genhtml_branch_coverage=1 00:41:21.525 --rc genhtml_function_coverage=1 00:41:21.525 --rc genhtml_legend=1 00:41:21.525 --rc geninfo_all_blocks=1 00:41:21.525 --rc geninfo_unexecuted_blocks=1 00:41:21.525 00:41:21.525 ' 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:21.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.525 --rc genhtml_branch_coverage=1 00:41:21.525 --rc genhtml_function_coverage=1 00:41:21.525 --rc genhtml_legend=1 00:41:21.525 --rc geninfo_all_blocks=1 00:41:21.525 --rc geninfo_unexecuted_blocks=1 00:41:21.525 00:41:21.525 ' 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:21.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.525 --rc genhtml_branch_coverage=1 00:41:21.525 --rc genhtml_function_coverage=1 00:41:21.525 --rc genhtml_legend=1 00:41:21.525 --rc geninfo_all_blocks=1 00:41:21.525 --rc geninfo_unexecuted_blocks=1 00:41:21.525 00:41:21.525 ' 00:41:21.525 03:22:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:21.525 03:22:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:21.525 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:21.526 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:41:21.526 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:21.526 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:21.526 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:21.526 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:21.526 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:21.526 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:21.526 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:21.526 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:21.526 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:21.526 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:21.526 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:41:21.526 03:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:28.095 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:28.095 03:22:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:28.095 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:28.096 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:28.096 Found net devices under 0000:af:00.0: cvl_0_0 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:28.096 Found net devices under 0000:af:00.1: cvl_0_1 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:28.096 03:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:28.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:28.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:41:28.096 00:41:28.096 --- 10.0.0.2 ping statistics --- 00:41:28.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:28.096 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:28.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:28.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:41:28.096 00:41:28.096 --- 10.0.0.1 ping statistics --- 00:41:28.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:28.096 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:28.096 03:22:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=425035 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 425035 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 425035 ']' 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:28.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:28.096 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:28.096 [2024-12-14 03:22:42.418001] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:28.096 [2024-12-14 03:22:42.419014] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:28.096 [2024-12-14 03:22:42.419049] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:28.096 [2024-12-14 03:22:42.495367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:28.096 [2024-12-14 03:22:42.517957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:28.096 [2024-12-14 03:22:42.517994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:28.096 [2024-12-14 03:22:42.518001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:28.096 [2024-12-14 03:22:42.518007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:28.096 [2024-12-14 03:22:42.518012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:28.096 [2024-12-14 03:22:42.519323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:41:28.096 [2024-12-14 03:22:42.519427] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:41:28.096 [2024-12-14 03:22:42.519535] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:28.096 [2024-12-14 03:22:42.519535] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:41:28.096 [2024-12-14 03:22:42.581976] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:28.096 [2024-12-14 03:22:42.582537] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:28.097 [2024-12-14 03:22:42.583121] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:28.097 [2024-12-14 03:22:42.583257] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:28.097 [2024-12-14 03:22:42.583386] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:28.097 [2024-12-14 03:22:42.648194] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:28.097 Malloc0 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.097 03:22:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:28.097 [2024-12-14 03:22:42.728399] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:28.097 { 00:41:28.097 "params": { 00:41:28.097 "name": "Nvme$subsystem", 00:41:28.097 "trtype": "$TEST_TRANSPORT", 00:41:28.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:28.097 "adrfam": "ipv4", 00:41:28.097 "trsvcid": "$NVMF_PORT", 00:41:28.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:28.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:28.097 "hdgst": ${hdgst:-false}, 00:41:28.097 "ddgst": ${ddgst:-false} 00:41:28.097 }, 00:41:28.097 "method": "bdev_nvme_attach_controller" 00:41:28.097 } 00:41:28.097 EOF 00:41:28.097 )") 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:41:28.097 03:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:28.097 "params": { 00:41:28.097 "name": "Nvme1", 00:41:28.097 "trtype": "tcp", 00:41:28.097 "traddr": "10.0.0.2", 00:41:28.097 "adrfam": "ipv4", 00:41:28.097 "trsvcid": "4420", 00:41:28.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:28.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:28.097 "hdgst": false, 00:41:28.097 "ddgst": false 00:41:28.097 }, 00:41:28.097 "method": "bdev_nvme_attach_controller" 00:41:28.097 }' 00:41:28.097 [2024-12-14 03:22:42.780367] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
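(Editor's note: before bdevio starts, the rpc_cmd calls above provision the target with a TCP transport, a 64 MiB Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.2:4420; the JSON printed at the end of the chunk is what bdevio then uses to attach. The same provisioning issued directly with scripts/rpc.py, as a sketch; the $rpc shorthand is illustrative, the flags are copied verbatim from the trace.)

rpc="$SPDK/scripts/rpc.py"          # talks to /var/tmp/spdk.sock by default
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420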
00:41:28.097 [2024-12-14 03:22:42.780413] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425058 ] 00:41:28.097 [2024-12-14 03:22:42.859829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:28.097 [2024-12-14 03:22:42.884582] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:28.097 [2024-12-14 03:22:42.884701] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:28.097 [2024-12-14 03:22:42.884702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:28.097 I/O targets: 00:41:28.097 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:41:28.097 00:41:28.097 00:41:28.097 CUnit - A unit testing framework for C - Version 2.1-3 00:41:28.097 http://cunit.sourceforge.net/ 00:41:28.097 00:41:28.097 00:41:28.097 Suite: bdevio tests on: Nvme1n1 00:41:28.097 Test: blockdev write read block ...passed 00:41:28.355 Test: blockdev write zeroes read block ...passed 00:41:28.355 Test: blockdev write zeroes read no split ...passed 00:41:28.355 Test: blockdev write zeroes read split ...passed 00:41:28.355 Test: blockdev write zeroes read split partial ...passed 00:41:28.355 Test: blockdev reset ...[2024-12-14 03:22:43.275924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:41:28.355 [2024-12-14 03:22:43.275984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb52340 (9): Bad file descriptor 00:41:28.355 [2024-12-14 03:22:43.321146] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:41:28.355 passed 00:41:28.355 Test: blockdev write read 8 blocks ...passed 00:41:28.355 Test: blockdev write read size > 128k ...passed 00:41:28.355 Test: blockdev write read invalid size ...passed 00:41:28.355 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:28.355 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:28.355 Test: blockdev write read max offset ...passed 00:41:28.355 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:28.612 Test: blockdev writev readv 8 blocks ...passed 00:41:28.612 Test: blockdev writev readv 30 x 1block ...passed 00:41:28.612 Test: blockdev writev readv block ...passed 00:41:28.612 Test: blockdev writev readv size > 128k ...passed 00:41:28.612 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:28.612 Test: blockdev comparev and writev ...[2024-12-14 03:22:43.532172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.612 [2024-12-14 03:22:43.532205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:28.612 [2024-12-14 03:22:43.532219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.612 [2024-12-14 03:22:43.532230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:28.612 [2024-12-14 03:22:43.532530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.612 [2024-12-14 03:22:43.532543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:28.612 [2024-12-14 03:22:43.532554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.612 [2024-12-14 03:22:43.532561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:28.612 [2024-12-14 03:22:43.532841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.612 [2024-12-14 03:22:43.532852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:28.612 [2024-12-14 03:22:43.532864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.612 [2024-12-14 03:22:43.532872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:28.612 [2024-12-14 03:22:43.533154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.613 [2024-12-14 03:22:43.533166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:28.613 [2024-12-14 03:22:43.533178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.613 [2024-12-14 03:22:43.533186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:28.613 passed 00:41:28.613 Test: blockdev nvme passthru rw ...passed 00:41:28.613 Test: blockdev nvme passthru vendor specific ...[2024-12-14 03:22:43.615689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:28.613 [2024-12-14 03:22:43.615707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:28.613 [2024-12-14 03:22:43.615819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:28.613 [2024-12-14 03:22:43.615829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:28.613 [2024-12-14 03:22:43.615934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:28.613 [2024-12-14 03:22:43.615944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:28.613 [2024-12-14 03:22:43.616049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:28.613 [2024-12-14 03:22:43.616059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:28.613 passed 00:41:28.613 Test: blockdev nvme admin passthru ...passed 00:41:28.613 Test: blockdev copy ...passed 00:41:28.613 00:41:28.613 Run Summary: Type Total Ran Passed Failed Inactive 00:41:28.613 suites 1 1 n/a 0 0 00:41:28.613 tests 23 23 23 0 0 00:41:28.613 asserts 152 152 152 0 n/a 00:41:28.613 00:41:28.613 Elapsed time = 1.009 seconds 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:28.872 rmmod nvme_tcp 00:41:28.872 rmmod nvme_fabrics 00:41:28.872 rmmod nvme_keyring 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
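(Editor's note: with all 23 bdevio tests passed, the fini path runs across this and the next chunk: the subsystem is deleted, nvmftestfini unloads the kernel initiator modules (the rmmod lines above), then the target process is killed, the tagged firewall rule is restored and the namespace is removed. A sketch of that sequence reusing the $rpc and $nvmfpid shorthands from the earlier sketches; the namespace delete is an assumption about what the silenced remove_spdk_ns call does.)

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp            # also drops the nvme_fabrics/nvme_keyring deps
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
iptables-save | grep -v SPDK_NVMF | iptables-restore   # the iptr helper below
ip netns delete cvl_0_0_ns_spdk    # assumed: remove_spdk_ns output is redirected away
ip -4 addr flush cvl_0_1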
00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 425035 ']' 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 425035 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 425035 ']' 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 425035 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 425035 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 425035' 00:41:28.872 killing process with pid 425035 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 425035 00:41:28.872 03:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 425035 00:41:29.132 03:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:29.132 03:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:29.132 03:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:29.132 03:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:41:29.132 03:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:41:29.132 03:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:29.132 03:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:41:29.132 03:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:29.132 03:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:29.132 03:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:29.132 03:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:29.132 03:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:31.035 03:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:31.294 00:41:31.294 real 0m9.927s 00:41:31.294 user 0m8.771s 
00:41:31.294 sys 0m5.102s 00:41:31.294 03:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:31.294 03:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:31.294 ************************************ 00:41:31.294 END TEST nvmf_bdevio 00:41:31.294 ************************************ 00:41:31.294 03:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:41:31.294 00:41:31.294 real 4m29.227s 00:41:31.294 user 9m5.768s 00:41:31.294 sys 1m48.422s 00:41:31.294 03:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:31.294 03:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:31.294 ************************************ 00:41:31.294 END TEST nvmf_target_core_interrupt_mode 00:41:31.294 ************************************ 00:41:31.294 03:22:46 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:31.294 03:22:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:31.294 03:22:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:31.294 03:22:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:31.294 ************************************ 00:41:31.294 START TEST nvmf_interrupt 00:41:31.294 ************************************ 00:41:31.294 03:22:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:31.294 * Looking for test storage... 
00:41:31.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:31.294 03:22:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:31.294 03:22:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:41:31.294 03:22:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:31.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.554 --rc genhtml_branch_coverage=1 00:41:31.554 --rc genhtml_function_coverage=1 00:41:31.554 --rc genhtml_legend=1 00:41:31.554 --rc geninfo_all_blocks=1 00:41:31.554 --rc geninfo_unexecuted_blocks=1 00:41:31.554 00:41:31.554 ' 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:31.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.554 --rc genhtml_branch_coverage=1 00:41:31.554 --rc genhtml_function_coverage=1 00:41:31.554 --rc genhtml_legend=1 00:41:31.554 --rc geninfo_all_blocks=1 00:41:31.554 --rc geninfo_unexecuted_blocks=1 00:41:31.554 00:41:31.554 ' 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:31.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.554 --rc genhtml_branch_coverage=1 00:41:31.554 --rc genhtml_function_coverage=1 00:41:31.554 --rc genhtml_legend=1 00:41:31.554 --rc geninfo_all_blocks=1 00:41:31.554 --rc geninfo_unexecuted_blocks=1 00:41:31.554 00:41:31.554 ' 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:31.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.554 --rc genhtml_branch_coverage=1 00:41:31.554 --rc genhtml_function_coverage=1 00:41:31.554 --rc genhtml_legend=1 00:41:31.554 --rc geninfo_all_blocks=1 00:41:31.554 --rc geninfo_unexecuted_blocks=1 00:41:31.554 00:41:31.554 ' 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:31.554 03:22:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:41:31.555 03:22:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:38.126 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:38.126 03:22:52 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:38.126 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:38.126 Found net devices under 0000:af:00.0: cvl_0_0 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:38.126 Found net devices under 0000:af:00.1: cvl_0_1 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:38.126 03:22:52 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:38.126 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:38.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:38.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:41:38.127 00:41:38.127 --- 10.0.0.2 ping statistics --- 00:41:38.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:38.127 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:38.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:38.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:41:38.127 00:41:38.127 --- 10.0.0.1 ping statistics --- 00:41:38.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:38.127 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=427331 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 427331 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 427331 ']' 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:38.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:38.127 [2024-12-14 03:22:52.445378] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:38.127 [2024-12-14 03:22:52.446328] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:38.127 [2024-12-14 03:22:52.446365] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:38.127 [2024-12-14 03:22:52.524624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:38.127 [2024-12-14 03:22:52.546451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
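(Editor's note: the nvmf_interrupt test repeats the same physical-port topology the bdevio run used: NIC port cvl_0_0 is moved into cvl_0_0_ns_spdk as the target side at 10.0.0.2, its sibling port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the port-4420 ACCEPT rule is tagged SPDK_NVMF so the fini path can strip it later. Consolidated sketch of the commands traced above, all values verbatim from the log.)

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator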
00:41:38.127 [2024-12-14 03:22:52.546487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:38.127 [2024-12-14 03:22:52.546494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:38.127 [2024-12-14 03:22:52.546500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:38.127 [2024-12-14 03:22:52.546505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:38.127 [2024-12-14 03:22:52.547544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:38.127 [2024-12-14 03:22:52.547544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:38.127 [2024-12-14 03:22:52.610520] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:38.127 [2024-12-14 03:22:52.611094] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:38.127 [2024-12-14 03:22:52.611346] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:41:38.127 5000+0 records in 00:41:38.127 5000+0 records out 00:41:38.127 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0173316 s, 591 MB/s 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:38.127 AIO0 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:38.127 [2024-12-14 03:22:52.736219] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.127 03:22:52 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:38.127 [2024-12-14 03:22:52.776582] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 427331 0 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 427331 0 idle 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=427331 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 427331 -w 256 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 427331 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0' 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 427331 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 427331 1 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 427331 1 idle 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=427331 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:38.127 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:38.128 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:38.128 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:38.128 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:38.128 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:38.128 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:38.128 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 427331 -w 256 00:41:38.128 03:22:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 427336 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 427336 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=427378 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
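(Editor's note: the load phase launched above drives spdk_nvme_perf against the listener created earlier while the reactor checks run. Restated below with the meaning of each flag; the values are copied from the trace, while the annotations reflect spdk_nvme_perf's documented options and are a reading aid rather than harness output.)

#   -q 256    queue depth, hence the "Controller IO queue size 256" notice below
#   -o 4096   4 KiB I/O size
#   -w randrw with -M 30   random mixed I/O, 30% reads / 70% writes
#   -t 10     run for 10 seconds
#   -c 0xC    core mask, lcores 2 and 3, kept off the target's 0x3 mask
#   -r '...'  transport ID of the TCP listener at 10.0.0.2:4420
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'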
00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 427331 0 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 427331 0 busy 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=427331 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 427331 -w 256 00:41:38.128 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 427331 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.41 reactor_0' 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 427331 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.41 reactor_0 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 427331 1 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 427331 1 busy 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=427331 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 427331 -w 256 00:41:38.386 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:38.644 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 427336 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.27 reactor_1' 00:41:38.644 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 427336 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.27 reactor_1 00:41:38.644 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:38.644 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:38.644 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:38.644 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:41:38.644 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:38.644 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:38.644 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:38.644 03:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:38.645 03:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 427378 00:41:48.615 Initializing NVMe Controllers 00:41:48.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:48.615 Controller IO queue size 256, less than required. 00:41:48.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:48.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:48.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:48.615 Initialization complete. Launching workers. 
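A quick arithmetic cross-check of the results table that follows: per-core bandwidth in MiB/s is just IOPS times the 4096-byte I/O size used above.

    # Worked check against the two per-core rows and the Total row below:
    #   16305.06 IOPS * 4096 B = 66,785,526 B/s  ~ 63.69 MiB/s   (lcore 2 row)
    #   16527.36 IOPS * 4096 B = 67,696,067 B/s  ~ 64.56 MiB/s   (lcore 3 row)
    #   sums: 32832.42 IOPS and 128.25 MiB/s, matching the Total row
    awk 'BEGIN { printf "%.2f %.2f\n", 16305.06*4096/1048576, 16527.36*4096/1048576 }'   # prints 63.69 64.56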
00:41:48.615 ======================================================== 00:41:48.615 Latency(us) 00:41:48.615 Device Information : IOPS MiB/s Average min max 00:41:48.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16305.06 63.69 15708.70 3266.91 29765.63 00:41:48.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16527.36 64.56 15493.43 8191.04 26810.04 00:41:48.615 ======================================================== 00:41:48.615 Total : 32832.42 128.25 15600.34 3266.91 29765.63 00:41:48.615 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 427331 0 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 427331 0 idle 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=427331 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 427331 -w 256 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 427331 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:20.22 reactor_0' 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 427331 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:20.22 reactor_0 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 427331 1 00:41:48.615 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 427331 1 idle 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=427331 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 427331 -w 256 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 427336 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1' 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 427336 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:48.616 03:23:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:49.184 03:23:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:41:49.184 03:23:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:41:49.184 03:23:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:49.184 03:23:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:49.184 03:23:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 427331 0 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 427331 0 idle 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=427331 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 427331 -w 256 00:41:51.089 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 427331 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.45 reactor_0' 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 427331 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.45 reactor_0 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 427331 1 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 427331 1 idle 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=427331 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:51.348 03:23:06 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 427331 -w 256 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 427336 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.08 reactor_1' 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 427336 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.08 reactor_1 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:51.348 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:51.349 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:51.349 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:51.349 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:51.349 03:23:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:51.349 03:23:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:51.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:51.607 03:23:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:51.608 rmmod nvme_tcp 00:41:51.608 rmmod nvme_fabrics 00:41:51.608 rmmod nvme_keyring 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 427331 ']' 00:41:51.608 
03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 427331 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 427331 ']' 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 427331 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 427331 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 427331' 00:41:51.608 killing process with pid 427331 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 427331 00:41:51.608 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 427331 00:41:51.867 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:51.867 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:51.867 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:51.867 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:41:51.867 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:41:51.867 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:51.867 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:41:51.867 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:51.867 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:51.867 03:23:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:51.867 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:51.867 03:23:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:54.402 03:23:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:54.402 00:41:54.402 real 0m22.684s 00:41:54.402 user 0m39.576s 00:41:54.402 sys 0m8.317s 00:41:54.402 03:23:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:54.402 03:23:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:54.402 ************************************ 00:41:54.402 END TEST nvmf_interrupt 00:41:54.402 ************************************ 00:41:54.402 00:41:54.402 real 35m24.227s 00:41:54.402 user 87m26.408s 00:41:54.402 sys 13m18.277s 00:41:54.402 03:23:09 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:54.402 03:23:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:54.402 ************************************ 00:41:54.402 END TEST nvmf_tcp 00:41:54.402 ************************************ 00:41:54.402 03:23:09 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:54.402 03:23:09 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:54.402 03:23:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
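With nvmf_interrupt wrapped up above and spdkcli_nvmf_tcp starting here, it is worth spelling out how the busy/idle verdicts in that test were reached: interrupt/common.sh samples the reactor thread with top, takes the %CPU column, and compares it against the thresholds visible in the trace (BUSY_THRESHOLD=30 while spdk_nvme_perf runs, idle_threshold=30 otherwise). A standalone reconstruction, assuming the traced lines reflect the whole helper, could look like:

    # Reconstructed from the interrupt/common.sh trace above; field $9 (%CPU in the default top
    # layout) is extracted, and "reactor_<idx>" is the SPDK reactor thread name for that core.
    reactor_cpu_rate() {
        local pid=$1 idx=$2
        top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | sed -e 's/^\s*//g' | awk '{print $9}'
    }

    rate=$(reactor_cpu_rate 427331 0)   # "0.0" while idle, "99.9" under the perf load in this log
    rate=${rate%%.*}                    # keep the integer part (0.0 -> 0, 99.9 -> 99); the exact
    rate=${rate:-0}                     # expansion used is an assumption, the trace only shows the result
    (( rate > 30 )) && echo busy || echo idle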
00:41:54.402 03:23:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:54.402 03:23:09 -- common/autotest_common.sh@10 -- # set +x 00:41:54.402 ************************************ 00:41:54.402 START TEST spdkcli_nvmf_tcp 00:41:54.402 ************************************ 00:41:54.402 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:54.402 * Looking for test storage... 00:41:54.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:54.402 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:54.402 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:54.402 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:54.402 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:54.402 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:54.402 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:54.402 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:54.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.403 --rc genhtml_branch_coverage=1 00:41:54.403 --rc genhtml_function_coverage=1 00:41:54.403 --rc genhtml_legend=1 00:41:54.403 --rc geninfo_all_blocks=1 00:41:54.403 --rc geninfo_unexecuted_blocks=1 00:41:54.403 00:41:54.403 ' 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:54.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.403 --rc genhtml_branch_coverage=1 00:41:54.403 --rc genhtml_function_coverage=1 00:41:54.403 --rc genhtml_legend=1 00:41:54.403 --rc geninfo_all_blocks=1 00:41:54.403 --rc geninfo_unexecuted_blocks=1 00:41:54.403 00:41:54.403 ' 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:54.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.403 --rc genhtml_branch_coverage=1 00:41:54.403 --rc genhtml_function_coverage=1 00:41:54.403 --rc genhtml_legend=1 00:41:54.403 --rc geninfo_all_blocks=1 00:41:54.403 --rc geninfo_unexecuted_blocks=1 00:41:54.403 00:41:54.403 ' 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:54.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.403 --rc genhtml_branch_coverage=1 00:41:54.403 --rc genhtml_function_coverage=1 00:41:54.403 --rc genhtml_legend=1 00:41:54.403 --rc geninfo_all_blocks=1 00:41:54.403 --rc geninfo_unexecuted_blocks=1 00:41:54.403 00:41:54.403 ' 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:54.403 
03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:54.403 03:23:09 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:54.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=427733 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 427733 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 427733 ']' 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:54.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:54.403 [2024-12-14 03:23:09.337301] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
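The nvmf_tgt just launched for this test (pid 427733, core mask 0x3, started at spdkcli/common.sh@32 above) is configured through spdkcli_job.py, whose full command list appears in the next entries. Run by hand against the same target, a few of those commands would look roughly like the lines below; one-shot invocation is how the harness's own "spdkcli.py ll /nvmf" check works later on, and extending it to the create commands is an assumption. The names (Malloc1, cnode1, serial N37SXV509SRW) are taken from the job itself.

    # A minimal slice of the command list spdkcli_job.py executes below, issued one command per call:
    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    ./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    ./scripts/spdkcli.py ll /nvmf    # the listing later diffed against spdkcli_nvmf.test.match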
00:41:54.403 [2024-12-14 03:23:09.337355] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427733 ] 00:41:54.403 [2024-12-14 03:23:09.408581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:54.403 [2024-12-14 03:23:09.432222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:54.403 [2024-12-14 03:23:09.432224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:54.403 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:54.662 03:23:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:54.662 03:23:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:54.662 03:23:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:54.662 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:54.662 03:23:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:54.662 03:23:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:54.662 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:54.662 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:54.662 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:54.662 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:54.662 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:54.662 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:54.662 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:54.662 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:54.662 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:54.662 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:54.662 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:54.662 ' 00:41:57.189 [2024-12-14 03:23:12.293503] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:58.560 [2024-12-14 03:23:13.633825] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:42:01.084 [2024-12-14 03:23:16.113365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:42:03.608 [2024-12-14 03:23:18.276132] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:04.980 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:04.980 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:42:04.980 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:04.980 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:04.980 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:04.980 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:04.980 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:04.980 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:04.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:04.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:04.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:04.980 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:04.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:04.980 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:04.980 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:04.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:04.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:04.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:04.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:04.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:04.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:04.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:04.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:04.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:04.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:04.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:04.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:04.981 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:04.981 03:23:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:42:04.981 03:23:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:04.981 03:23:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:04.981 03:23:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:04.981 03:23:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:04.981 03:23:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:04.981 03:23:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:04.981 03:23:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:05.546 03:23:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:05.546 03:23:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:05.546 03:23:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:05.546 03:23:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:05.546 03:23:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:05.546 
03:23:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:05.546 03:23:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:05.546 03:23:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:05.546 03:23:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:05.546 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:05.546 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:05.546 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:05.546 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:05.546 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:05.546 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:05.546 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:05.546 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:42:05.546 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:05.546 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:05.546 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:05.546 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:05.546 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:05.546 ' 00:42:12.100 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:42:12.100 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:42:12.100 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:12.100 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:42:12.100 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:42:12.100 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:42:12.100 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:42:12.100 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:12.100 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:42:12.100 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:42:12.100 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:42:12.100 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:42:12.100 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:42:12.100 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:12.100 
03:23:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 427733 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 427733 ']' 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 427733 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 427733 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 427733' 00:42:12.100 killing process with pid 427733 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 427733 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 427733 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 427733 ']' 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 427733 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 427733 ']' 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 427733 00:42:12.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (427733) - No such process 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 427733 is not found' 00:42:12.100 Process with pid 427733 is not found 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:42:12.100 00:42:12.100 real 0m17.339s 00:42:12.100 user 0m38.200s 00:42:12.100 sys 0m0.877s 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:12.100 03:23:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:12.100 ************************************ 00:42:12.100 END TEST spdkcli_nvmf_tcp 00:42:12.100 ************************************ 00:42:12.100 03:23:26 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:12.100 03:23:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:12.100 03:23:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:12.100 03:23:26 -- common/autotest_common.sh@10 -- # set +x 00:42:12.100 ************************************ 00:42:12.100 START TEST nvmf_identify_passthru 00:42:12.100 ************************************ 00:42:12.100 03:23:26 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:12.100 * Looking for test storage... 
00:42:12.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:12.101 03:23:26 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:12.101 03:23:26 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:42:12.101 03:23:26 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:12.101 03:23:26 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:42:12.101 03:23:26 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:12.101 03:23:26 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:12.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.101 --rc genhtml_branch_coverage=1 00:42:12.101 --rc genhtml_function_coverage=1 00:42:12.101 --rc genhtml_legend=1 00:42:12.101 --rc geninfo_all_blocks=1 00:42:12.101 --rc geninfo_unexecuted_blocks=1 00:42:12.101 00:42:12.101 ' 00:42:12.101 03:23:26 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:12.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.101 --rc genhtml_branch_coverage=1 00:42:12.101 --rc genhtml_function_coverage=1 00:42:12.101 --rc genhtml_legend=1 00:42:12.101 --rc geninfo_all_blocks=1 00:42:12.101 --rc geninfo_unexecuted_blocks=1 00:42:12.101 00:42:12.101 ' 00:42:12.101 03:23:26 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:12.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.101 --rc genhtml_branch_coverage=1 00:42:12.101 --rc genhtml_function_coverage=1 00:42:12.101 --rc genhtml_legend=1 00:42:12.101 --rc geninfo_all_blocks=1 00:42:12.101 --rc geninfo_unexecuted_blocks=1 00:42:12.101 00:42:12.101 ' 00:42:12.101 03:23:26 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:12.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.101 --rc genhtml_branch_coverage=1 00:42:12.101 --rc genhtml_function_coverage=1 00:42:12.101 --rc genhtml_legend=1 00:42:12.101 --rc geninfo_all_blocks=1 00:42:12.101 --rc geninfo_unexecuted_blocks=1 00:42:12.101 00:42:12.101 ' 00:42:12.101 03:23:26 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:12.101 03:23:26 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.101 03:23:26 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.101 03:23:26 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.101 03:23:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:12.101 03:23:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:12.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:12.101 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:12.101 03:23:26 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:12.101 03:23:26 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:12.101 03:23:26 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.101 03:23:26 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.101 03:23:26 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.101 03:23:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:12.101 03:23:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.101 03:23:26 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:42:12.102 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:12.102 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:12.102 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:12.102 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:12.102 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:12.102 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:12.102 03:23:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:12.102 03:23:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:12.102 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:12.102 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:12.102 03:23:26 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:42:12.102 03:23:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:42:17.378 03:23:32 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:17.378 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.378 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:17.379 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:17.379 Found net devices under 0000:af:00.0: cvl_0_0 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:17.379 Found net devices under 0000:af:00.1: cvl_0_1 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:17.379 03:23:32 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:17.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:17.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:42:17.379 00:42:17.379 --- 10.0.0.2 ping statistics --- 00:42:17.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.379 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:17.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
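Note: the nvmf_tcp_init sequence traced above builds the loopback test topology used for the rest of this run. The target-side port (cvl_0_0) is moved into its own network namespace, cvl_0_0_ns_spdk, and addressed as 10.0.0.2/24, while the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1/24; an iptables rule then opens TCP port 4420 for NVMe/TCP. A minimal sketch of that bring-up, assuming the same rig-specific interface names, is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in

The ping in each direction (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) only confirms reachability before the target application is started.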
00:42:17.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:42:17.379 00:42:17.379 --- 10.0.0.1 ping statistics --- 00:42:17.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.379 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:17.379 03:23:32 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:17.379 03:23:32 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:42:17.379 03:23:32 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:17.379 03:23:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.379 03:23:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:42:17.379 03:23:32 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:42:17.379 03:23:32 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:42:17.379 03:23:32 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:42:17.379 03:23:32 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:42:17.379 03:23:32 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:42:17.379 03:23:32 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:42:17.638 03:23:32 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:42:17.638 03:23:32 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:42:17.638 03:23:32 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:42:17.638 03:23:32 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:42:17.638 03:23:32 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:42:17.638 03:23:32 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:42:17.638 03:23:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:42:17.638 03:23:32 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:42:17.638 03:23:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:42:17.638 03:23:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:42:17.638 03:23:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:42:21.827 03:23:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ7244049A1P0FGN 00:42:21.827 03:23:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:42:21.827 03:23:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:42:21.827 03:23:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:42:26.015 03:23:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:42:26.015 03:23:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:42:26.015 03:23:40 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:26.015 03:23:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:26.015 03:23:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:42:26.015 03:23:40 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:26.015 03:23:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:26.015 03:23:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=430318 00:42:26.015 03:23:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:42:26.015 03:23:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:26.015 03:23:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 430318 00:42:26.015 03:23:40 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 430318 ']' 00:42:26.015 03:23:40 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:26.015 03:23:40 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:26.015 03:23:40 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:26.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:26.015 03:23:40 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:26.015 03:23:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:26.015 [2024-12-14 03:23:40.907949] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:42:26.015 [2024-12-14 03:23:40.907993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:26.015 [2024-12-14 03:23:40.986393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:26.015 [2024-12-14 03:23:41.009518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:26.015 [2024-12-14 03:23:41.009555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
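Note: the target is launched inside the target namespace as "nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc" (shared-memory id 0, tracepoint group mask 0xFFFF, core mask 0xF, hence the four reactors reported below). --wait-for-rpc holds the application in a pre-initialization state so that configuration which must precede framework init can be applied over JSON-RPC first; that is why the RPC trace that follows issues nvmf_set_config before framework_start_init. The sequence driven by identify_passthru.sh is roughly:

  rpc_cmd nvmf_set_config --passthru-identify-ctrlr       # enable the custom identify-ctrlr handler
  rpc_cmd framework_start_init                            # finish SPDK initialization
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192         # create the TCP transport
  rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Further down, spdk_nvme_identify is run against the TCP listener and the serial and model number it reports (BTLJ7244049A1P0FGN / INTEL) are compared with the values read directly from the PCIe controller, which is the actual pass/fail criterion of this test.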
00:42:26.015 [2024-12-14 03:23:41.009562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:26.015 [2024-12-14 03:23:41.009568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:26.015 [2024-12-14 03:23:41.009573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:26.015 [2024-12-14 03:23:41.010837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:26.015 [2024-12-14 03:23:41.010950] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:42:26.015 [2024-12-14 03:23:41.011057] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:26.015 [2024-12-14 03:23:41.011058] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:42:26.015 03:23:41 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:26.015 03:23:41 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:42:26.015 03:23:41 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:42:26.015 03:23:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.015 03:23:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:26.015 INFO: Log level set to 20 00:42:26.015 INFO: Requests: 00:42:26.015 { 00:42:26.015 "jsonrpc": "2.0", 00:42:26.015 "method": "nvmf_set_config", 00:42:26.015 "id": 1, 00:42:26.015 "params": { 00:42:26.015 "admin_cmd_passthru": { 00:42:26.015 "identify_ctrlr": true 00:42:26.015 } 00:42:26.015 } 00:42:26.015 } 00:42:26.015 00:42:26.015 INFO: response: 00:42:26.015 { 00:42:26.015 "jsonrpc": "2.0", 00:42:26.015 "id": 1, 00:42:26.015 "result": true 00:42:26.015 } 00:42:26.015 00:42:26.015 03:23:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.015 03:23:41 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:42:26.015 03:23:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.015 03:23:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:26.015 INFO: Setting log level to 20 00:42:26.015 INFO: Setting log level to 20 00:42:26.015 INFO: Log level set to 20 00:42:26.015 INFO: Log level set to 20 00:42:26.015 INFO: Requests: 00:42:26.015 { 00:42:26.015 "jsonrpc": "2.0", 00:42:26.015 "method": "framework_start_init", 00:42:26.015 "id": 1 00:42:26.015 } 00:42:26.015 00:42:26.015 INFO: Requests: 00:42:26.015 { 00:42:26.015 "jsonrpc": "2.0", 00:42:26.015 "method": "framework_start_init", 00:42:26.015 "id": 1 00:42:26.015 } 00:42:26.015 00:42:26.015 [2024-12-14 03:23:41.133215] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:42:26.015 INFO: response: 00:42:26.015 { 00:42:26.015 "jsonrpc": "2.0", 00:42:26.015 "id": 1, 00:42:26.015 "result": true 00:42:26.015 } 00:42:26.015 00:42:26.015 INFO: response: 00:42:26.015 { 00:42:26.015 "jsonrpc": "2.0", 00:42:26.015 "id": 1, 00:42:26.015 "result": true 00:42:26.015 } 00:42:26.015 00:42:26.015 03:23:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.015 03:23:41 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:26.015 03:23:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.015 03:23:41 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:42:26.015 INFO: Setting log level to 40 00:42:26.015 INFO: Setting log level to 40 00:42:26.015 INFO: Setting log level to 40 00:42:26.015 [2024-12-14 03:23:41.146460] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:26.272 03:23:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.272 03:23:41 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:42:26.272 03:23:41 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:26.272 03:23:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:26.272 03:23:41 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:42:26.272 03:23:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.272 03:23:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:29.549 Nvme0n1 00:42:29.549 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.549 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:42:29.549 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.549 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:29.550 [2024-12-14 03:23:44.058620] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:29.550 [ 00:42:29.550 { 00:42:29.550 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:42:29.550 "subtype": "Discovery", 00:42:29.550 "listen_addresses": [], 00:42:29.550 "allow_any_host": true, 00:42:29.550 "hosts": [] 00:42:29.550 }, 00:42:29.550 { 00:42:29.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:29.550 "subtype": "NVMe", 00:42:29.550 "listen_addresses": [ 00:42:29.550 { 00:42:29.550 "trtype": "TCP", 00:42:29.550 "adrfam": "IPv4", 00:42:29.550 "traddr": "10.0.0.2", 00:42:29.550 "trsvcid": "4420" 00:42:29.550 } 00:42:29.550 ], 00:42:29.550 "allow_any_host": true, 00:42:29.550 "hosts": [], 00:42:29.550 "serial_number": 
"SPDK00000000000001", 00:42:29.550 "model_number": "SPDK bdev Controller", 00:42:29.550 "max_namespaces": 1, 00:42:29.550 "min_cntlid": 1, 00:42:29.550 "max_cntlid": 65519, 00:42:29.550 "namespaces": [ 00:42:29.550 { 00:42:29.550 "nsid": 1, 00:42:29.550 "bdev_name": "Nvme0n1", 00:42:29.550 "name": "Nvme0n1", 00:42:29.550 "nguid": "AF3C35FF931A40C59DD9B1EFF9A59719", 00:42:29.550 "uuid": "af3c35ff-931a-40c5-9dd9-b1eff9a59719" 00:42:29.550 } 00:42:29.550 ] 00:42:29.550 } 00:42:29.550 ] 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:42:29.550 03:23:44 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:42:29.550 03:23:44 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:29.550 03:23:44 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:42:29.550 03:23:44 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:29.550 03:23:44 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:42:29.550 03:23:44 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:29.550 03:23:44 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:29.550 rmmod nvme_tcp 00:42:29.550 rmmod nvme_fabrics 00:42:29.550 rmmod nvme_keyring 00:42:29.550 03:23:44 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:29.550 03:23:44 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:42:29.550 03:23:44 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:42:29.550 03:23:44 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 430318 ']' 00:42:29.550 03:23:44 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 430318 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 430318 ']' 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 430318 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:29.550 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 430318 00:42:29.808 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:29.808 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:29.808 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 430318' 00:42:29.808 killing process with pid 430318 00:42:29.808 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 430318 00:42:29.808 03:23:44 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 430318 00:42:31.182 03:23:46 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:31.182 03:23:46 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:31.182 03:23:46 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:31.182 03:23:46 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:42:31.182 03:23:46 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:42:31.182 03:23:46 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:31.182 03:23:46 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:42:31.182 03:23:46 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:31.182 03:23:46 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:31.182 03:23:46 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:31.182 03:23:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:31.182 03:23:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:33.087 03:23:48 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:33.346 00:42:33.346 real 0m21.730s 00:42:33.346 user 0m27.981s 00:42:33.346 sys 0m5.238s 00:42:33.346 03:23:48 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:33.346 03:23:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:33.346 ************************************ 00:42:33.346 END TEST nvmf_identify_passthru 00:42:33.346 ************************************ 00:42:33.346 03:23:48 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:33.346 03:23:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:33.346 03:23:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:33.346 03:23:48 -- common/autotest_common.sh@10 -- # set +x 00:42:33.346 ************************************ 00:42:33.346 START TEST nvmf_dif 00:42:33.346 ************************************ 00:42:33.346 03:23:48 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:33.346 * Looking for test storage... 
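Note: the identify_passthru test finishes by tearing the environment back down: the subsystem is deleted, the nvme-tcp module stack is unloaded (the rmmod lines above), the target process (pid 430318) is killed and waited on, the SPDK_NVMF iptables rule is removed by filtering it out of iptables-save output, and the test namespace plus the leftover address on cvl_0_1 are cleaned up. A condensed sketch of that cleanup, using only commands visible in the trace, is:

  modprobe -v -r nvme-tcp                                # also drops nvme_fabrics and nvme_keyring
  kill 430318 && wait 430318                             # stop nvmf_tgt
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove the test firewall rule
  ip -4 addr flush cvl_0_1                               # after the namespace is removed

The nvmf_dif test starting here then sources the same nvmf/common.sh and rebuilds an identical environment from scratch, so each test runs isolated.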
00:42:33.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:33.346 03:23:48 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:33.346 03:23:48 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:42:33.346 03:23:48 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:33.346 03:23:48 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:33.346 03:23:48 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:42:33.346 03:23:48 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:33.346 03:23:48 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:33.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.346 --rc genhtml_branch_coverage=1 00:42:33.346 --rc genhtml_function_coverage=1 00:42:33.346 --rc genhtml_legend=1 00:42:33.346 --rc geninfo_all_blocks=1 00:42:33.346 --rc geninfo_unexecuted_blocks=1 00:42:33.346 00:42:33.346 ' 00:42:33.346 03:23:48 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:33.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.346 --rc genhtml_branch_coverage=1 00:42:33.346 --rc genhtml_function_coverage=1 00:42:33.346 --rc genhtml_legend=1 00:42:33.346 --rc geninfo_all_blocks=1 00:42:33.346 --rc geninfo_unexecuted_blocks=1 00:42:33.346 00:42:33.346 ' 00:42:33.346 03:23:48 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:42:33.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.346 --rc genhtml_branch_coverage=1 00:42:33.346 --rc genhtml_function_coverage=1 00:42:33.346 --rc genhtml_legend=1 00:42:33.346 --rc geninfo_all_blocks=1 00:42:33.346 --rc geninfo_unexecuted_blocks=1 00:42:33.346 00:42:33.346 ' 00:42:33.346 03:23:48 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:33.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.346 --rc genhtml_branch_coverage=1 00:42:33.346 --rc genhtml_function_coverage=1 00:42:33.346 --rc genhtml_legend=1 00:42:33.346 --rc geninfo_all_blocks=1 00:42:33.346 --rc geninfo_unexecuted_blocks=1 00:42:33.346 00:42:33.346 ' 00:42:33.346 03:23:48 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:33.346 03:23:48 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:42:33.347 03:23:48 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:33.347 03:23:48 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:33.347 03:23:48 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:33.347 03:23:48 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:33.347 03:23:48 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:33.347 03:23:48 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:33.347 03:23:48 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:33.347 03:23:48 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:33.347 03:23:48 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:33.606 03:23:48 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:42:33.606 03:23:48 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:33.606 03:23:48 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:33.606 03:23:48 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:33.606 03:23:48 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.606 03:23:48 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.606 03:23:48 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.606 03:23:48 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:42:33.606 03:23:48 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:33.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:33.606 03:23:48 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:42:33.606 03:23:48 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:42:33.606 03:23:48 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:42:33.606 03:23:48 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:42:33.606 03:23:48 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:33.606 03:23:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:33.606 03:23:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:33.606 03:23:48 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:42:33.606 03:23:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:39.041 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:39.041 
03:23:54 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:39.041 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:39.041 Found net devices under 0000:af:00.0: cvl_0_0 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:39.041 Found net devices under 0000:af:00.1: cvl_0_1 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:39.041 03:23:54 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:39.042 03:23:54 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:39.042 03:23:54 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:39.042 03:23:54 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:39.042 03:23:54 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:39.042 03:23:54 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:39.042 03:23:54 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:39.042 03:23:54 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:39.042 03:23:54 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:39.042 03:23:54 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:39.042 03:23:54 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:39.042 03:23:54 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:39.042 03:23:54 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:39.042 03:23:54 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:39.301 03:23:54 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:39.301 03:23:54 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:39.301 03:23:54 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:39.301 03:23:54 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:39.301 03:23:54 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:39.301 03:23:54 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:39.301 03:23:54 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:39.301 03:23:54 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:39.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:39.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:42:39.301 00:42:39.301 --- 10.0.0.2 ping statistics --- 00:42:39.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:39.301 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:42:39.301 03:23:54 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:39.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
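The nvmf_tcp_init steps logged above split the detected E810 port pair across network namespaces so that the target side (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator side (10.0.0.1, in the default namespace) exchange NVMe/TCP traffic over a real link. A condensed recap of that setup is sketched below; it assumes root privileges and uses the interface names detected in this particular run.

```bash
#!/usr/bin/env bash
# Minimal sketch of the nvmf_tcp_init sequence shown in the log above.
# Interface names (cvl_0_0 / cvl_0_1) and addresses are the ones this run
# detected; they are not universal.
set -euo pipefail

TGT_IF=cvl_0_0          # moved into a private namespace, carries the target IP
INI_IF=cvl_0_1          # stays in the default namespace, carries the initiator IP
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in through the initiator-side interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Connectivity checks, mirroring the two pings that follow in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```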
00:42:39.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:42:39.301 00:42:39.301 --- 10.0.0.1 ping statistics --- 00:42:39.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:39.301 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:42:39.301 03:23:54 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:39.301 03:23:54 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:42:39.301 03:23:54 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:39.301 03:23:54 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:42.588 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:42.588 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:42:42.588 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:42:42.588 03:23:57 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:42.588 03:23:57 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:42.588 03:23:57 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:42.588 03:23:57 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:42.588 03:23:57 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:42.588 03:23:57 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:42.588 03:23:57 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:42.588 03:23:57 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:42.588 03:23:57 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:42.588 03:23:57 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=433438 00:42:42.588 03:23:57 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 433438 00:42:42.588 03:23:57 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 433438 ']' 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:42:42.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:42.588 [2024-12-14 03:23:57.298797] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:42:42.588 [2024-12-14 03:23:57.298841] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:42.588 [2024-12-14 03:23:57.378286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:42.588 [2024-12-14 03:23:57.399199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:42.588 [2024-12-14 03:23:57.399233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:42.588 [2024-12-14 03:23:57.399240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:42.588 [2024-12-14 03:23:57.399246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:42.588 [2024-12-14 03:23:57.399251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:42.588 [2024-12-14 03:23:57.399724] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:42:42.588 03:23:57 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:42.588 03:23:57 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:42.588 03:23:57 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:42.588 03:23:57 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:42.588 [2024-12-14 03:23:57.525656] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.588 03:23:57 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:42.588 03:23:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:42.588 ************************************ 00:42:42.588 START TEST fio_dif_1_default 00:42:42.588 ************************************ 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:42.588 bdev_null0 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.588 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:42.589 [2024-12-14 03:23:57.597953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:42.589 { 00:42:42.589 "params": { 00:42:42.589 "name": "Nvme$subsystem", 00:42:42.589 "trtype": "$TEST_TRANSPORT", 00:42:42.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:42.589 "adrfam": "ipv4", 00:42:42.589 "trsvcid": "$NVMF_PORT", 00:42:42.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:42.589 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:42:42.589 "hdgst": ${hdgst:-false}, 00:42:42.589 "ddgst": ${ddgst:-false} 00:42:42.589 }, 00:42:42.589 "method": "bdev_nvme_attach_controller" 00:42:42.589 } 00:42:42.589 EOF 00:42:42.589 )") 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:42.589 "params": { 00:42:42.589 "name": "Nvme0", 00:42:42.589 "trtype": "tcp", 00:42:42.589 "traddr": "10.0.0.2", 00:42:42.589 "adrfam": "ipv4", 00:42:42.589 "trsvcid": "4420", 00:42:42.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:42.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:42.589 "hdgst": false, 00:42:42.589 "ddgst": false 00:42:42.589 }, 00:42:42.589 "method": "bdev_nvme_attach_controller" 00:42:42.589 }' 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:42.589 03:23:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:43.155 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:43.155 fio-3.35 00:42:43.155 Starting 1 thread 00:42:55.354 00:42:55.354 filename0: (groupid=0, jobs=1): err= 0: pid=433594: Sat Dec 14 03:24:08 2024 00:42:55.354 read: IOPS=188, BW=754KiB/s (772kB/s)(7568KiB/10041msec) 00:42:55.354 slat (nsec): min=5881, max=32399, avg=6364.27, stdev=1224.42 00:42:55.354 clat (usec): min=388, max=44723, avg=21210.16, stdev=20519.49 00:42:55.354 lat (usec): min=394, max=44748, avg=21216.53, stdev=20519.38 00:42:55.354 clat percentiles (usec): 00:42:55.354 | 1.00th=[ 429], 5.00th=[ 469], 10.00th=[ 482], 20.00th=[ 594], 00:42:55.354 | 30.00th=[ 611], 40.00th=[ 619], 50.00th=[40633], 60.00th=[41157], 00:42:55.354 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:42:55.354 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:42:55.354 | 99.99th=[44827] 00:42:55.354 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=755.20, stdev=26.27, samples=20 00:42:55.354 iops : min= 176, max= 192, avg=188.80, stdev= 6.57, samples=20 00:42:55.354 lat (usec) : 500=14.64%, 750=35.04% 00:42:55.354 lat (msec) : 50=50.32% 00:42:55.354 cpu : usr=92.03%, sys=7.73%, ctx=14, majf=0, minf=0 00:42:55.354 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:55.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:55.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:55.354 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:55.354 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:55.354 
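The job description that gen_fio_conf pipes to fio is not echoed verbatim in the log, but the report above pins down its parameters (randread, 4 KiB blocks, iodepth 4, one job, roughly 10 s). A plausible reconstruction is sketched below; the bdev name Nvme0n1 and the file names are assumptions, inferred from the "Nvme0" controller in the JSON config above.

```bash
# Hypothetical reconstruction of the fio job fed over /dev/fd/61 for
# fio_dif_1_default; parameters match the report above, the filename (bdev
# name Nvme0n1) is assumed from the "Nvme0" controller in the JSON config.
cat > dif_1_default.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=4096
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
numjobs=1
EOF

# fio is then launched through the SPDK bdev plugin, with the bdev_nvme JSON
# shown above supplied via --spdk_json_conf (file names here are illustrative;
# the test passes both through /dev/fd descriptors instead).
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf bdev_nvme.json dif_1_default.fio
```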
00:42:55.354 Run status group 0 (all jobs): 00:42:55.354 READ: bw=754KiB/s (772kB/s), 754KiB/s-754KiB/s (772kB/s-772kB/s), io=7568KiB (7750kB), run=10041-10041msec 00:42:55.354 03:24:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:55.354 03:24:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:55.354 03:24:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.355 00:42:55.355 real 0m11.101s 00:42:55.355 user 0m16.205s 00:42:55.355 sys 0m1.068s 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 ************************************ 00:42:55.355 END TEST fio_dif_1_default 00:42:55.355 ************************************ 00:42:55.355 03:24:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:55.355 03:24:08 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:55.355 03:24:08 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:55.355 03:24:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 ************************************ 00:42:55.355 START TEST fio_dif_1_multi_subsystems 00:42:55.355 ************************************ 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 bdev_null0 00:42:55.355 03:24:08 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 [2024-12-14 03:24:08.775234] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 bdev_null1 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:55.355 { 00:42:55.355 "params": { 00:42:55.355 "name": "Nvme$subsystem", 00:42:55.355 "trtype": "$TEST_TRANSPORT", 00:42:55.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:55.355 "adrfam": "ipv4", 00:42:55.355 "trsvcid": "$NVMF_PORT", 00:42:55.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:55.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:55.355 "hdgst": ${hdgst:-false}, 00:42:55.355 "ddgst": ${ddgst:-false} 00:42:55.355 }, 00:42:55.355 "method": "bdev_nvme_attach_controller" 00:42:55.355 } 00:42:55.355 EOF 00:42:55.355 )") 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:55.355 { 00:42:55.355 "params": { 00:42:55.355 "name": "Nvme$subsystem", 00:42:55.355 "trtype": "$TEST_TRANSPORT", 00:42:55.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:55.355 "adrfam": "ipv4", 00:42:55.355 "trsvcid": "$NVMF_PORT", 00:42:55.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:55.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:55.355 "hdgst": ${hdgst:-false}, 00:42:55.355 "ddgst": ${ddgst:-false} 00:42:55.355 }, 00:42:55.355 "method": "bdev_nvme_attach_controller" 00:42:55.355 } 00:42:55.355 EOF 00:42:55.355 )") 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:55.355 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:55.355 "params": { 00:42:55.355 "name": "Nvme0", 00:42:55.355 "trtype": "tcp", 00:42:55.355 "traddr": "10.0.0.2", 00:42:55.355 "adrfam": "ipv4", 00:42:55.355 "trsvcid": "4420", 00:42:55.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:55.355 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:55.355 "hdgst": false, 00:42:55.356 "ddgst": false 00:42:55.356 }, 00:42:55.356 "method": "bdev_nvme_attach_controller" 00:42:55.356 },{ 00:42:55.356 "params": { 00:42:55.356 "name": "Nvme1", 00:42:55.356 "trtype": "tcp", 00:42:55.356 "traddr": "10.0.0.2", 00:42:55.356 "adrfam": "ipv4", 00:42:55.356 "trsvcid": "4420", 00:42:55.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:55.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:55.356 "hdgst": false, 00:42:55.356 "ddgst": false 00:42:55.356 }, 00:42:55.356 "method": "bdev_nvme_attach_controller" 00:42:55.356 }' 00:42:55.356 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:55.356 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:55.356 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:55.356 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:55.356 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:55.356 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:55.356 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:42:55.356 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:55.356 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:55.356 03:24:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:55.356 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:55.356 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:55.356 fio-3.35 00:42:55.356 Starting 2 threads 00:43:05.327 00:43:05.327 filename0: (groupid=0, jobs=1): err= 0: pid=434385: Sat Dec 14 03:24:19 2024 00:43:05.327 read: IOPS=192, BW=771KiB/s (790kB/s)(7712KiB/10001msec) 00:43:05.327 slat (nsec): min=5989, max=29969, avg=7019.08, stdev=1856.26 00:43:05.327 clat (usec): min=385, max=42555, avg=20727.52, stdev=20520.76 00:43:05.327 lat (usec): min=392, max=42562, avg=20734.54, stdev=20520.20 00:43:05.327 clat percentiles (usec): 00:43:05.327 | 1.00th=[ 400], 5.00th=[ 408], 10.00th=[ 412], 20.00th=[ 420], 00:43:05.327 | 30.00th=[ 429], 40.00th=[ 562], 50.00th=[ 644], 60.00th=[41157], 00:43:05.327 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:43:05.327 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:43:05.327 | 99.99th=[42730] 00:43:05.327 bw ( KiB/s): min= 672, max= 832, per=65.61%, avg=773.05, stdev=40.28, samples=19 00:43:05.327 iops : min= 168, max= 208, avg=193.26, stdev=10.07, samples=19 00:43:05.327 lat (usec) : 500=37.45%, 750=12.55%, 1000=0.62% 00:43:05.327 lat (msec) : 50=49.38% 00:43:05.327 cpu : usr=96.52%, sys=3.26%, ctx=8, majf=0, minf=143 00:43:05.328 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:05.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.328 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.328 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:05.328 filename1: (groupid=0, jobs=1): err= 0: pid=434386: Sat Dec 14 03:24:19 2024 00:43:05.328 read: IOPS=101, BW=408KiB/s (417kB/s)(4080KiB/10009msec) 00:43:05.328 slat (nsec): min=6016, max=25462, avg=7595.91, stdev=2433.53 00:43:05.328 clat (usec): min=363, max=42015, avg=39225.39, stdev=8249.66 00:43:05.328 lat (usec): min=370, max=42027, avg=39232.98, stdev=8249.65 00:43:05.328 clat percentiles (usec): 00:43:05.328 | 1.00th=[ 379], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:43:05.328 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:05.328 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:05.328 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:43:05.328 | 99.99th=[42206] 00:43:05.328 bw ( KiB/s): min= 384, max= 512, per=34.46%, avg=406.40, stdev=33.00, samples=20 00:43:05.328 iops : min= 96, max= 128, avg=101.60, stdev= 8.25, samples=20 00:43:05.328 lat (usec) : 500=4.22%, 750=0.10% 00:43:05.328 lat (msec) : 50=95.69% 00:43:05.328 cpu : usr=96.98%, sys=2.80%, ctx=14, majf=0, minf=113 00:43:05.328 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:05.328 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.328 issued rwts: total=1020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.328 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:05.328 00:43:05.328 Run status group 0 (all jobs): 00:43:05.328 READ: bw=1178KiB/s (1206kB/s), 408KiB/s-771KiB/s (417kB/s-790kB/s), io=11.5MiB (12.1MB), run=10001-10009msec 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.328 00:43:05.328 real 0m11.442s 00:43:05.328 user 0m26.538s 00:43:05.328 sys 0m0.908s 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:05.328 03:24:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:05.328 ************************************ 00:43:05.328 END TEST fio_dif_1_multi_subsystems 00:43:05.328 ************************************ 00:43:05.328 
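The fio_dif_1_multi_subsystems case that just finished repeats the same per-subsystem setup twice (bdev_null0/bdev_null1 behind cnode0/cnode1, both listening on 10.0.0.2:4420), and the fio run gains a second file bound to the second controller, which is why the report above shows two filenames. A condensed equivalent of the rpc_cmd calls logged for it, under the same scripts/rpc.py assumption as in the earlier sketch:

```bash
# Condensed form of the per-subsystem setup repeated above for IDs 0 and 1.
RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"

for sub in 0 1; do
    $RPC bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
        --serial-number "53313233-$sub" --allow-any-host
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
        -t tcp -a 10.0.0.2 -s 4420
done
```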
03:24:20 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:05.328 03:24:20 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:05.328 03:24:20 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:05.328 03:24:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:05.328 ************************************ 00:43:05.328 START TEST fio_dif_rand_params 00:43:05.328 ************************************ 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.328 bdev_null0 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.328 [2024-12-14 03:24:20.288202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:05.328 { 00:43:05.328 "params": { 00:43:05.328 "name": "Nvme$subsystem", 00:43:05.328 "trtype": "$TEST_TRANSPORT", 00:43:05.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:05.328 "adrfam": "ipv4", 00:43:05.328 "trsvcid": "$NVMF_PORT", 00:43:05.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:05.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:05.328 "hdgst": ${hdgst:-false}, 00:43:05.328 "ddgst": ${ddgst:-false} 00:43:05.328 }, 00:43:05.328 "method": "bdev_nvme_attach_controller" 00:43:05.328 } 00:43:05.328 EOF 00:43:05.328 )") 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:05.328 03:24:20 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:43:05.328 03:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:05.329 03:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:05.329 "params": { 00:43:05.329 "name": "Nvme0", 00:43:05.329 "trtype": "tcp", 00:43:05.329 "traddr": "10.0.0.2", 00:43:05.329 "adrfam": "ipv4", 00:43:05.329 "trsvcid": "4420", 00:43:05.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:05.329 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:05.329 "hdgst": false, 00:43:05.329 "ddgst": false 00:43:05.329 }, 00:43:05.329 "method": "bdev_nvme_attach_controller" 00:43:05.329 }' 00:43:05.329 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:05.329 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:05.329 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:05.329 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:05.329 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:05.329 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:05.329 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:05.329 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:05.329 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:05.329 03:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:05.587 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:05.587 ... 
00:43:05.587 fio-3.35 00:43:05.587 Starting 3 threads 00:43:12.150 00:43:12.150 filename0: (groupid=0, jobs=1): err= 0: pid=434656: Sat Dec 14 03:24:26 2024 00:43:12.150 read: IOPS=325, BW=40.7MiB/s (42.6MB/s)(204MiB/5024msec) 00:43:12.150 slat (nsec): min=6285, max=28839, avg=10639.06, stdev=1949.85 00:43:12.150 clat (usec): min=3567, max=51196, avg=9210.20, stdev=4341.96 00:43:12.150 lat (usec): min=3574, max=51225, avg=9220.84, stdev=4342.23 00:43:12.150 clat percentiles (usec): 00:43:12.150 | 1.00th=[ 4113], 5.00th=[ 6783], 10.00th=[ 7504], 20.00th=[ 7963], 00:43:12.150 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 00:43:12.150 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10683], 00:43:12.150 | 99.00th=[46924], 99.50th=[47973], 99.90th=[51119], 99.95th=[51119], 00:43:12.150 | 99.99th=[51119] 00:43:12.150 bw ( KiB/s): min=37376, max=44544, per=35.22%, avg=41753.60, stdev=2558.44, samples=10 00:43:12.150 iops : min= 292, max= 348, avg=326.20, stdev=19.99, samples=10 00:43:12.150 lat (msec) : 4=0.73%, 10=83.90%, 20=14.26%, 50=0.92%, 100=0.18% 00:43:12.150 cpu : usr=94.92%, sys=4.80%, ctx=11, majf=0, minf=9 00:43:12.150 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:12.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.150 issued rwts: total=1634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:12.150 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:12.150 filename0: (groupid=0, jobs=1): err= 0: pid=434657: Sat Dec 14 03:24:26 2024 00:43:12.150 read: IOPS=292, BW=36.6MiB/s (38.3MB/s)(184MiB/5043msec) 00:43:12.150 slat (nsec): min=6278, max=25952, avg=11262.28, stdev=1836.55 00:43:12.150 clat (usec): min=5418, max=52682, avg=10214.59, stdev=4258.83 00:43:12.150 lat (usec): min=5425, max=52692, avg=10225.85, stdev=4258.93 00:43:12.150 clat percentiles (usec): 00:43:12.150 | 1.00th=[ 5997], 5.00th=[ 7504], 10.00th=[ 8225], 20.00th=[ 8717], 00:43:12.150 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10159], 00:43:12.150 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11600], 95.00th=[12256], 00:43:12.150 | 99.00th=[46400], 99.50th=[47449], 99.90th=[50070], 99.95th=[52691], 00:43:12.150 | 99.99th=[52691] 00:43:12.150 bw ( KiB/s): min=34304, max=40960, per=31.81%, avg=37708.80, stdev=2264.31, samples=10 00:43:12.150 iops : min= 268, max= 320, avg=294.60, stdev=17.69, samples=10 00:43:12.150 lat (msec) : 10=56.81%, 20=42.03%, 50=1.02%, 100=0.14% 00:43:12.150 cpu : usr=94.47%, sys=5.30%, ctx=6, majf=0, minf=9 00:43:12.150 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:12.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.150 issued rwts: total=1475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:12.150 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:12.150 filename0: (groupid=0, jobs=1): err= 0: pid=434658: Sat Dec 14 03:24:26 2024 00:43:12.150 read: IOPS=309, BW=38.7MiB/s (40.6MB/s)(195MiB/5043msec) 00:43:12.150 slat (nsec): min=6269, max=23486, avg=10863.19, stdev=1912.65 00:43:12.150 clat (usec): min=3484, max=48343, avg=9646.53, stdev=3129.72 00:43:12.150 lat (usec): min=3493, max=48350, avg=9657.39, stdev=3129.81 00:43:12.150 clat percentiles (usec): 00:43:12.150 | 1.00th=[ 3654], 5.00th=[ 6325], 10.00th=[ 7439], 20.00th=[ 
8455], 00:43:12.150 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[ 9896], 00:43:12.150 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11469], 95.00th=[12125], 00:43:12.150 | 99.00th=[12911], 99.50th=[43254], 99.90th=[46924], 99.95th=[48497], 00:43:12.150 | 99.99th=[48497] 00:43:12.150 bw ( KiB/s): min=36096, max=45568, per=33.68%, avg=39936.00, stdev=2687.66, samples=10 00:43:12.150 iops : min= 282, max= 356, avg=312.00, stdev=21.00, samples=10 00:43:12.150 lat (msec) : 4=2.75%, 10=58.96%, 20=37.77%, 50=0.51% 00:43:12.150 cpu : usr=94.98%, sys=4.76%, ctx=10, majf=0, minf=9 00:43:12.150 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:12.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.150 issued rwts: total=1562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:12.150 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:12.150 00:43:12.150 Run status group 0 (all jobs): 00:43:12.150 READ: bw=116MiB/s (121MB/s), 36.6MiB/s-40.7MiB/s (38.3MB/s-42.6MB/s), io=584MiB (612MB), run=5024-5043msec 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.150 bdev_null0 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:12.150 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.151 [2024-12-14 03:24:26.417833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.151 bdev_null1 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.151 bdev_null2 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:12.151 03:24:26 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:12.151 { 00:43:12.151 "params": { 00:43:12.151 "name": "Nvme$subsystem", 00:43:12.151 "trtype": "$TEST_TRANSPORT", 00:43:12.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:12.151 "adrfam": "ipv4", 00:43:12.151 "trsvcid": "$NVMF_PORT", 00:43:12.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:12.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:12.151 "hdgst": ${hdgst:-false}, 00:43:12.151 "ddgst": ${ddgst:-false} 00:43:12.151 }, 00:43:12.151 "method": "bdev_nvme_attach_controller" 00:43:12.151 } 00:43:12.151 EOF 00:43:12.151 )") 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:12.151 { 00:43:12.151 "params": { 00:43:12.151 "name": "Nvme$subsystem", 00:43:12.151 "trtype": "$TEST_TRANSPORT", 00:43:12.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:12.151 "adrfam": "ipv4", 00:43:12.151 "trsvcid": "$NVMF_PORT", 00:43:12.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:12.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:12.151 "hdgst": ${hdgst:-false}, 00:43:12.151 "ddgst": ${ddgst:-false} 00:43:12.151 }, 00:43:12.151 "method": "bdev_nvme_attach_controller" 00:43:12.151 } 00:43:12.151 EOF 00:43:12.151 )") 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:12.151 03:24:26 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:12.151 { 00:43:12.151 "params": { 00:43:12.151 "name": "Nvme$subsystem", 00:43:12.151 "trtype": "$TEST_TRANSPORT", 00:43:12.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:12.151 "adrfam": "ipv4", 00:43:12.151 "trsvcid": "$NVMF_PORT", 00:43:12.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:12.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:12.151 "hdgst": ${hdgst:-false}, 00:43:12.151 "ddgst": ${ddgst:-false} 00:43:12.151 }, 00:43:12.151 "method": "bdev_nvme_attach_controller" 00:43:12.151 } 00:43:12.151 EOF 00:43:12.151 )") 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:12.151 03:24:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:12.151 "params": { 00:43:12.151 "name": "Nvme0", 00:43:12.151 "trtype": "tcp", 00:43:12.151 "traddr": "10.0.0.2", 00:43:12.151 "adrfam": "ipv4", 00:43:12.151 "trsvcid": "4420", 00:43:12.151 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:12.151 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:12.151 "hdgst": false, 00:43:12.151 "ddgst": false 00:43:12.151 }, 00:43:12.151 "method": "bdev_nvme_attach_controller" 00:43:12.151 },{ 00:43:12.151 "params": { 00:43:12.151 "name": "Nvme1", 00:43:12.151 "trtype": "tcp", 00:43:12.151 "traddr": "10.0.0.2", 00:43:12.151 "adrfam": "ipv4", 00:43:12.151 "trsvcid": "4420", 00:43:12.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:12.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:12.152 "hdgst": false, 00:43:12.152 "ddgst": false 00:43:12.152 }, 00:43:12.152 "method": "bdev_nvme_attach_controller" 00:43:12.152 },{ 00:43:12.152 "params": { 00:43:12.152 "name": "Nvme2", 00:43:12.152 "trtype": "tcp", 00:43:12.152 "traddr": "10.0.0.2", 00:43:12.152 "adrfam": "ipv4", 00:43:12.152 "trsvcid": "4420", 00:43:12.152 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:43:12.152 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:43:12.152 "hdgst": false, 00:43:12.152 "ddgst": false 00:43:12.152 }, 00:43:12.152 "method": "bdev_nvme_attach_controller" 00:43:12.152 }' 00:43:12.152 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:12.152 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:12.152 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:12.152 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:12.152 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:12.152 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:12.152 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 
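Editor's note: the xtrace above boils down to a small amount of target-side setup. Three 64 MB null bdevs are created with 512-byte blocks, 16 bytes of metadata and DIF type 2 protection, and each is exported through its own NVMe/TCP subsystem listening on 10.0.0.2:4420. A minimal standalone sketch of the same sequence for subsystem 0 follows, assuming a running nvmf_tgt whose TCP transport was created earlier in the test, and assuming that the rpc_cmd seen in the trace wraps scripts/rpc.py; the commands themselves are copied from the trace.

# Sketch only: one of the three subsystems set up in the trace above.
# Assumes nvmf_tgt is already running and nvmf_create_transport -t tcp was issued earlier.
RPC=./scripts/rpc.py

# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2

# Dedicated subsystem for it, reachable over NVMe/TCP at 10.0.0.2:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The initiator side needs no mount at all: the JSON printed to /dev/fd/62 above simply hands fio's spdk_bdev ioengine one bdev_nvme_attach_controller call per subsystem (Nvme0, Nvme1, Nvme2), and the fio jobs launched below read from the NVMe bdevs those calls create.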
00:43:12.152 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:12.152 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:12.152 03:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:12.152 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:12.152 ... 00:43:12.152 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:12.152 ... 00:43:12.152 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:12.152 ... 00:43:12.152 fio-3.35 00:43:12.152 Starting 24 threads 00:43:24.349 00:43:24.349 filename0: (groupid=0, jobs=1): err= 0: pid=434868: Sat Dec 14 03:24:37 2024 00:43:24.349 read: IOPS=560, BW=2242KiB/s (2295kB/s)(21.9MiB/10021msec) 00:43:24.349 slat (nsec): min=7083, max=70646, avg=16803.39, stdev=8575.93 00:43:24.349 clat (usec): min=3901, max=35543, avg=28407.63, stdev=3087.72 00:43:24.349 lat (usec): min=3920, max=35559, avg=28424.44, stdev=3088.13 00:43:24.349 clat percentiles (usec): 00:43:24.349 | 1.00th=[10028], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:43:24.349 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:43:24.349 | 70.00th=[28705], 80.00th=[29492], 90.00th=[30278], 95.00th=[30540], 00:43:24.349 | 99.00th=[31065], 99.50th=[31065], 99.90th=[35390], 99.95th=[35390], 00:43:24.349 | 99.99th=[35390] 00:43:24.349 bw ( KiB/s): min= 2048, max= 2688, per=4.24%, avg=2239.00, stdev=128.28, samples=20 00:43:24.349 iops : min= 512, max= 672, avg=559.60, stdev=32.12, samples=20 00:43:24.349 lat (msec) : 4=0.23%, 10=0.73%, 20=1.60%, 50=97.44% 00:43:24.349 cpu : usr=98.53%, sys=1.03%, ctx=54, majf=0, minf=49 00:43:24.349 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:24.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.349 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.349 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.349 filename0: (groupid=0, jobs=1): err= 0: pid=434869: Sat Dec 14 03:24:37 2024 00:43:24.349 read: IOPS=551, BW=2206KiB/s (2259kB/s)(21.6MiB/10008msec) 00:43:24.349 slat (usec): min=4, max=100, avg=44.24, stdev=16.53 00:43:24.349 clat (usec): min=17502, max=39080, avg=28632.87, stdev=1299.99 00:43:24.349 lat (usec): min=17517, max=39094, avg=28677.10, stdev=1293.76 00:43:24.349 clat percentiles (usec): 00:43:24.349 | 1.00th=[27395], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:43:24.349 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:43:24.349 | 70.00th=[28443], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:43:24.349 | 99.00th=[31327], 99.50th=[33817], 99.90th=[39060], 99.95th=[39060], 00:43:24.349 | 99.99th=[39060] 00:43:24.349 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2194.89, stdev=87.23, samples=19 00:43:24.349 iops : min= 512, max= 576, avg=548.53, stdev=21.68, samples=19 00:43:24.349 lat (msec) : 20=0.29%, 50=99.71% 00:43:24.349 cpu : usr=98.43%, sys=1.13%, ctx=48, majf=0, minf=26 00:43:24.349 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 
8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:24.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.349 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.349 issued rwts: total=5520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.349 filename0: (groupid=0, jobs=1): err= 0: pid=434870: Sat Dec 14 03:24:37 2024 00:43:24.349 read: IOPS=550, BW=2204KiB/s (2256kB/s)(21.5MiB/10002msec) 00:43:24.349 slat (usec): min=6, max=100, avg=45.95, stdev=17.88 00:43:24.349 clat (usec): min=15427, max=64136, avg=28650.70, stdev=2499.29 00:43:24.349 lat (usec): min=15488, max=64154, avg=28696.65, stdev=2496.12 00:43:24.349 clat percentiles (usec): 00:43:24.349 | 1.00th=[21890], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:43:24.349 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:43:24.349 | 70.00th=[28443], 80.00th=[29492], 90.00th=[30278], 95.00th=[30278], 00:43:24.349 | 99.00th=[31851], 99.50th=[34341], 99.90th=[64226], 99.95th=[64226], 00:43:24.349 | 99.99th=[64226] 00:43:24.349 bw ( KiB/s): min= 1920, max= 2304, per=4.15%, avg=2194.05, stdev=104.91, samples=19 00:43:24.349 iops : min= 480, max= 576, avg=548.32, stdev=26.08, samples=19 00:43:24.349 lat (msec) : 20=0.69%, 50=99.02%, 100=0.29% 00:43:24.349 cpu : usr=98.62%, sys=1.02%, ctx=28, majf=0, minf=39 00:43:24.349 IO depths : 1=5.4%, 2=11.6%, 4=24.8%, 8=51.1%, 16=7.1%, 32=0.0%, >=64=0.0% 00:43:24.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.349 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.349 issued rwts: total=5510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.349 filename0: (groupid=0, jobs=1): err= 0: pid=434871: Sat Dec 14 03:24:37 2024 00:43:24.349 read: IOPS=554, BW=2217KiB/s (2271kB/s)(21.7MiB/10005msec) 00:43:24.349 slat (usec): min=7, max=142, avg=50.56, stdev=20.36 00:43:24.349 clat (usec): min=15450, max=63017, avg=28424.06, stdev=2161.08 00:43:24.349 lat (usec): min=15510, max=63034, avg=28474.61, stdev=2158.81 00:43:24.349 clat percentiles (usec): 00:43:24.349 | 1.00th=[20579], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:43:24.349 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:43:24.349 | 70.00th=[28443], 80.00th=[28705], 90.00th=[30278], 95.00th=[30540], 00:43:24.349 | 99.00th=[35914], 99.50th=[37487], 99.90th=[43779], 99.95th=[43779], 00:43:24.349 | 99.99th=[63177] 00:43:24.349 bw ( KiB/s): min= 1971, max= 2304, per=4.17%, avg=2206.26, stdev=90.95, samples=19 00:43:24.349 iops : min= 492, max= 576, avg=551.37, stdev=22.74, samples=19 00:43:24.349 lat (msec) : 20=0.88%, 50=99.08%, 100=0.04% 00:43:24.349 cpu : usr=98.64%, sys=0.91%, ctx=39, majf=0, minf=20 00:43:24.349 IO depths : 1=5.4%, 2=10.9%, 4=22.3%, 8=53.9%, 16=7.5%, 32=0.0%, >=64=0.0% 00:43:24.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.349 complete : 0=0.0%, 4=93.4%, 8=1.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.349 issued rwts: total=5546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.349 filename0: (groupid=0, jobs=1): err= 0: pid=434872: Sat Dec 14 03:24:37 2024 00:43:24.349 read: IOPS=551, BW=2206KiB/s (2258kB/s)(21.5MiB/10004msec) 00:43:24.349 slat (usec): min=7, max=124, avg=33.43, stdev=23.79 00:43:24.349 clat 
(usec): min=15347, max=55738, avg=28671.17, stdev=2054.21 00:43:24.349 lat (usec): min=15387, max=55755, avg=28704.61, stdev=2051.08 00:43:24.349 clat percentiles (usec): 00:43:24.349 | 1.00th=[21103], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:43:24.349 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:43:24.349 | 70.00th=[28443], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:43:24.349 | 99.00th=[31589], 99.50th=[34341], 99.90th=[55837], 99.95th=[55837], 00:43:24.349 | 99.99th=[55837] 00:43:24.349 bw ( KiB/s): min= 1920, max= 2304, per=4.16%, avg=2199.95, stdev=107.72, samples=19 00:43:24.349 iops : min= 480, max= 576, avg=549.79, stdev=26.89, samples=19 00:43:24.349 lat (msec) : 20=0.62%, 50=99.09%, 100=0.29% 00:43:24.349 cpu : usr=98.64%, sys=0.99%, ctx=14, majf=0, minf=32 00:43:24.349 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:43:24.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.349 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.349 issued rwts: total=5516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.349 filename0: (groupid=0, jobs=1): err= 0: pid=434873: Sat Dec 14 03:24:37 2024 00:43:24.349 read: IOPS=559, BW=2236KiB/s (2290kB/s)(21.9MiB/10017msec) 00:43:24.349 slat (usec): min=7, max=117, avg=34.39, stdev=23.72 00:43:24.349 clat (usec): min=3879, max=33721, avg=28351.25, stdev=2966.30 00:43:24.349 lat (usec): min=3891, max=33735, avg=28385.64, stdev=2964.99 00:43:24.349 clat percentiles (usec): 00:43:24.349 | 1.00th=[10945], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:43:24.349 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:43:24.349 | 70.00th=[28705], 80.00th=[29492], 90.00th=[30278], 95.00th=[30540], 00:43:24.349 | 99.00th=[31065], 99.50th=[32113], 99.90th=[33817], 99.95th=[33817], 00:43:24.349 | 99.99th=[33817] 00:43:24.349 bw ( KiB/s): min= 2048, max= 2688, per=4.22%, avg=2232.35, stdev=134.24, samples=20 00:43:24.349 iops : min= 512, max= 672, avg=557.90, stdev=33.54, samples=20 00:43:24.349 lat (msec) : 4=0.20%, 10=0.66%, 20=1.14%, 50=98.00% 00:43:24.349 cpu : usr=98.66%, sys=0.97%, ctx=12, majf=0, minf=27 00:43:24.349 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:24.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.349 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.349 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.349 filename0: (groupid=0, jobs=1): err= 0: pid=434874: Sat Dec 14 03:24:37 2024 00:43:24.349 read: IOPS=552, BW=2212KiB/s (2265kB/s)(21.6MiB/10013msec) 00:43:24.349 slat (usec): min=7, max=105, avg=39.23, stdev=17.59 00:43:24.349 clat (usec): min=13239, max=33872, avg=28630.83, stdev=1360.73 00:43:24.349 lat (usec): min=13252, max=33888, avg=28670.06, stdev=1355.83 00:43:24.349 clat percentiles (usec): 00:43:24.349 | 1.00th=[26870], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:43:24.349 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:43:24.349 | 70.00th=[28705], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:43:24.349 | 99.00th=[31065], 99.50th=[31851], 99.90th=[33817], 99.95th=[33817], 00:43:24.349 | 99.99th=[33817] 00:43:24.349 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, 
avg=2208.37, stdev=82.51, samples=19 00:43:24.349 iops : min= 512, max= 576, avg=551.89, stdev=20.47, samples=19 00:43:24.349 lat (msec) : 20=0.29%, 50=99.71% 00:43:24.349 cpu : usr=97.35%, sys=1.65%, ctx=215, majf=0, minf=34 00:43:24.349 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:24.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.349 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.349 issued rwts: total=5536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.349 filename0: (groupid=0, jobs=1): err= 0: pid=434875: Sat Dec 14 03:24:37 2024 00:43:24.349 read: IOPS=551, BW=2206KiB/s (2259kB/s)(21.6MiB/10007msec) 00:43:24.349 slat (nsec): min=5971, max=99340, avg=42422.06, stdev=16810.70 00:43:24.349 clat (usec): min=17498, max=47125, avg=28665.64, stdev=1380.78 00:43:24.349 lat (usec): min=17513, max=47139, avg=28708.06, stdev=1374.92 00:43:24.349 clat percentiles (usec): 00:43:24.349 | 1.00th=[27395], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:43:24.349 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:43:24.349 | 70.00th=[28705], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:43:24.350 | 99.00th=[31589], 99.50th=[33817], 99.90th=[38536], 99.95th=[38536], 00:43:24.350 | 99.99th=[46924] 00:43:24.350 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2195.11, stdev=86.86, samples=19 00:43:24.350 iops : min= 512, max= 576, avg=548.58, stdev=21.59, samples=19 00:43:24.350 lat (msec) : 20=0.29%, 50=99.71% 00:43:24.350 cpu : usr=98.35%, sys=1.17%, ctx=106, majf=0, minf=32 00:43:24.350 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 issued rwts: total=5520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.350 filename1: (groupid=0, jobs=1): err= 0: pid=434876: Sat Dec 14 03:24:37 2024 00:43:24.350 read: IOPS=550, BW=2201KiB/s (2254kB/s)(21.5MiB/10003msec) 00:43:24.350 slat (usec): min=5, max=128, avg=38.55, stdev=26.24 00:43:24.350 clat (usec): min=15365, max=55483, avg=28753.53, stdev=2049.27 00:43:24.350 lat (usec): min=15427, max=55501, avg=28792.08, stdev=2042.10 00:43:24.350 clat percentiles (usec): 00:43:24.350 | 1.00th=[24511], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:43:24.350 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:43:24.350 | 70.00th=[28705], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:43:24.350 | 99.00th=[34866], 99.50th=[36439], 99.90th=[55313], 99.95th=[55313], 00:43:24.350 | 99.99th=[55313] 00:43:24.350 bw ( KiB/s): min= 1920, max= 2304, per=4.15%, avg=2194.89, stdev=106.40, samples=19 00:43:24.350 iops : min= 480, max= 576, avg=548.53, stdev=26.55, samples=19 00:43:24.350 lat (msec) : 20=0.29%, 50=99.42%, 100=0.29% 00:43:24.350 cpu : usr=98.79%, sys=0.83%, ctx=13, majf=0, minf=24 00:43:24.350 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 issued rwts: total=5504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.350 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:43:24.350 filename1: (groupid=0, jobs=1): err= 0: pid=434877: Sat Dec 14 03:24:37 2024 00:43:24.350 read: IOPS=550, BW=2203KiB/s (2256kB/s)(21.5MiB/10005msec) 00:43:24.350 slat (usec): min=7, max=124, avg=35.78, stdev=23.96 00:43:24.350 clat (usec): min=15271, max=49280, avg=28696.77, stdev=1797.59 00:43:24.350 lat (usec): min=15284, max=49306, avg=28732.55, stdev=1792.96 00:43:24.350 clat percentiles (usec): 00:43:24.350 | 1.00th=[25560], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:43:24.350 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:43:24.350 | 70.00th=[28443], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:43:24.350 | 99.00th=[33817], 99.50th=[36963], 99.90th=[49021], 99.95th=[49021], 00:43:24.350 | 99.99th=[49021] 00:43:24.350 bw ( KiB/s): min= 1971, max= 2304, per=4.14%, avg=2191.11, stdev=96.49, samples=19 00:43:24.350 iops : min= 492, max= 576, avg=547.58, stdev=24.20, samples=19 00:43:24.350 lat (msec) : 20=0.29%, 50=99.71% 00:43:24.350 cpu : usr=98.79%, sys=0.84%, ctx=15, majf=0, minf=35 00:43:24.350 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 issued rwts: total=5510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.350 filename1: (groupid=0, jobs=1): err= 0: pid=434878: Sat Dec 14 03:24:37 2024 00:43:24.350 read: IOPS=559, BW=2236KiB/s (2290kB/s)(21.9MiB/10017msec) 00:43:24.350 slat (usec): min=7, max=122, avg=43.77, stdev=23.94 00:43:24.350 clat (usec): min=3797, max=40101, avg=28267.34, stdev=2982.78 00:43:24.350 lat (usec): min=3814, max=40111, avg=28311.11, stdev=2981.15 00:43:24.350 clat percentiles (usec): 00:43:24.350 | 1.00th=[11207], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:43:24.350 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:43:24.350 | 70.00th=[28705], 80.00th=[29492], 90.00th=[30278], 95.00th=[30540], 00:43:24.350 | 99.00th=[31065], 99.50th=[32113], 99.90th=[33817], 99.95th=[33817], 00:43:24.350 | 99.99th=[40109] 00:43:24.350 bw ( KiB/s): min= 2048, max= 2688, per=4.22%, avg=2232.35, stdev=134.24, samples=20 00:43:24.350 iops : min= 512, max= 672, avg=557.90, stdev=33.54, samples=20 00:43:24.350 lat (msec) : 4=0.29%, 10=0.64%, 20=1.07%, 50=98.00% 00:43:24.350 cpu : usr=98.56%, sys=1.07%, ctx=16, majf=0, minf=23 00:43:24.350 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.350 filename1: (groupid=0, jobs=1): err= 0: pid=434879: Sat Dec 14 03:24:37 2024 00:43:24.350 read: IOPS=556, BW=2228KiB/s (2281kB/s)(21.8MiB/10004msec) 00:43:24.350 slat (usec): min=4, max=117, avg=33.28, stdev=24.70 00:43:24.350 clat (usec): min=10160, max=36570, avg=28434.48, stdev=2231.39 00:43:24.350 lat (usec): min=10168, max=36586, avg=28467.76, stdev=2229.72 00:43:24.350 clat percentiles (usec): 00:43:24.350 | 1.00th=[16909], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:43:24.350 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 
60.00th=[28443], 00:43:24.350 | 70.00th=[28705], 80.00th=[29230], 90.00th=[30278], 95.00th=[30540], 00:43:24.350 | 99.00th=[31851], 99.50th=[34341], 99.90th=[36439], 99.95th=[36439], 00:43:24.350 | 99.99th=[36439] 00:43:24.350 bw ( KiB/s): min= 2048, max= 2480, per=4.21%, avg=2223.89, stdev=102.26, samples=19 00:43:24.350 iops : min= 512, max= 620, avg=555.89, stdev=25.50, samples=19 00:43:24.350 lat (msec) : 20=1.90%, 50=98.10% 00:43:24.350 cpu : usr=98.75%, sys=0.88%, ctx=12, majf=0, minf=31 00:43:24.350 IO depths : 1=5.9%, 2=12.0%, 4=24.5%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:43:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 issued rwts: total=5571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.350 filename1: (groupid=0, jobs=1): err= 0: pid=434880: Sat Dec 14 03:24:37 2024 00:43:24.350 read: IOPS=554, BW=2220KiB/s (2273kB/s)(21.7MiB/10004msec) 00:43:24.350 slat (nsec): min=7641, max=86567, avg=22896.31, stdev=12876.73 00:43:24.350 clat (usec): min=9017, max=35393, avg=28613.08, stdev=1931.53 00:43:24.350 lat (usec): min=9031, max=35422, avg=28635.98, stdev=1931.36 00:43:24.350 clat percentiles (usec): 00:43:24.350 | 1.00th=[17695], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:43:24.350 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:43:24.350 | 70.00th=[28705], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:43:24.350 | 99.00th=[31065], 99.50th=[31065], 99.90th=[35390], 99.95th=[35390], 00:43:24.350 | 99.99th=[35390] 00:43:24.350 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2215.89, stdev=85.75, samples=19 00:43:24.350 iops : min= 512, max= 576, avg=553.89, stdev=21.42, samples=19 00:43:24.350 lat (msec) : 10=0.29%, 20=0.86%, 50=98.85% 00:43:24.350 cpu : usr=98.74%, sys=0.89%, ctx=20, majf=0, minf=44 00:43:24.350 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 issued rwts: total=5552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.350 filename1: (groupid=0, jobs=1): err= 0: pid=434881: Sat Dec 14 03:24:37 2024 00:43:24.350 read: IOPS=554, BW=2220KiB/s (2273kB/s)(21.7MiB/10004msec) 00:43:24.350 slat (usec): min=7, max=117, avg=29.89, stdev=21.33 00:43:24.350 clat (usec): min=8957, max=35561, avg=28551.20, stdev=1947.42 00:43:24.350 lat (usec): min=8965, max=35575, avg=28581.09, stdev=1945.58 00:43:24.350 clat percentiles (usec): 00:43:24.350 | 1.00th=[17695], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:43:24.350 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:43:24.350 | 70.00th=[28443], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:43:24.350 | 99.00th=[31065], 99.50th=[31065], 99.90th=[35390], 99.95th=[35390], 00:43:24.350 | 99.99th=[35390] 00:43:24.350 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2215.89, stdev=85.75, samples=19 00:43:24.350 iops : min= 512, max= 576, avg=553.89, stdev=21.42, samples=19 00:43:24.350 lat (msec) : 10=0.29%, 20=0.86%, 50=98.85% 00:43:24.350 cpu : usr=98.63%, sys=0.99%, ctx=11, majf=0, minf=24 00:43:24.350 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 
00:43:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 issued rwts: total=5552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.350 filename1: (groupid=0, jobs=1): err= 0: pid=434882: Sat Dec 14 03:24:37 2024 00:43:24.350 read: IOPS=549, BW=2196KiB/s (2249kB/s)(21.6MiB/10056msec) 00:43:24.350 slat (usec): min=6, max=127, avg=50.95, stdev=23.34 00:43:24.350 clat (usec): min=19787, max=55159, avg=28524.87, stdev=1292.71 00:43:24.350 lat (usec): min=19803, max=55188, avg=28575.82, stdev=1285.87 00:43:24.350 clat percentiles (usec): 00:43:24.350 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:43:24.350 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:43:24.350 | 70.00th=[28443], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:43:24.350 | 99.00th=[31065], 99.50th=[32113], 99.90th=[36439], 99.95th=[40109], 00:43:24.350 | 99.99th=[55313] 00:43:24.350 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2203.80, stdev=89.44, samples=20 00:43:24.350 iops : min= 512, max= 576, avg=550.75, stdev=22.37, samples=20 00:43:24.350 lat (msec) : 20=0.04%, 50=99.93%, 100=0.04% 00:43:24.350 cpu : usr=98.73%, sys=0.89%, ctx=14, majf=0, minf=25 00:43:24.350 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.350 issued rwts: total=5522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.350 filename1: (groupid=0, jobs=1): err= 0: pid=434883: Sat Dec 14 03:24:37 2024 00:43:24.350 read: IOPS=552, BW=2210KiB/s (2263kB/s)(21.6MiB/10003msec) 00:43:24.350 slat (usec): min=6, max=125, avg=33.07, stdev=23.59 00:43:24.350 clat (usec): min=10768, max=68266, avg=28620.14, stdev=2249.38 00:43:24.350 lat (usec): min=10781, max=68283, avg=28653.21, stdev=2246.79 00:43:24.350 clat percentiles (usec): 00:43:24.351 | 1.00th=[20317], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:43:24.351 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:43:24.351 | 70.00th=[28443], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:43:24.351 | 99.00th=[31589], 99.50th=[34341], 99.90th=[55313], 99.95th=[55313], 00:43:24.351 | 99.99th=[68682] 00:43:24.351 bw ( KiB/s): min= 1920, max= 2352, per=4.17%, avg=2204.16, stdev=112.17, samples=19 00:43:24.351 iops : min= 480, max= 588, avg=550.84, stdev=28.01, samples=19 00:43:24.351 lat (msec) : 20=0.98%, 50=98.73%, 100=0.29% 00:43:24.351 cpu : usr=98.61%, sys=1.00%, ctx=13, majf=0, minf=28 00:43:24.351 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:43:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 issued rwts: total=5526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.351 filename2: (groupid=0, jobs=1): err= 0: pid=434884: Sat Dec 14 03:24:37 2024 00:43:24.351 read: IOPS=553, BW=2212KiB/s (2266kB/s)(21.6MiB/10009msec) 00:43:24.351 slat (usec): min=6, max=122, avg=29.55, stdev=21.94 00:43:24.351 clat (usec): min=12268, max=44793, 
avg=28646.67, stdev=1521.62 00:43:24.351 lat (usec): min=12280, max=44819, avg=28676.23, stdev=1517.82 00:43:24.351 clat percentiles (usec): 00:43:24.351 | 1.00th=[21890], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:43:24.351 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:43:24.351 | 70.00th=[28443], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:43:24.351 | 99.00th=[31065], 99.50th=[31065], 99.90th=[43779], 99.95th=[44303], 00:43:24.351 | 99.99th=[44827] 00:43:24.351 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2208.37, stdev=82.51, samples=19 00:43:24.351 iops : min= 512, max= 576, avg=551.89, stdev=20.47, samples=19 00:43:24.351 lat (msec) : 20=0.43%, 50=99.57% 00:43:24.351 cpu : usr=98.69%, sys=0.93%, ctx=14, majf=0, minf=24 00:43:24.351 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 issued rwts: total=5536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.351 filename2: (groupid=0, jobs=1): err= 0: pid=434885: Sat Dec 14 03:24:37 2024 00:43:24.351 read: IOPS=551, BW=2207KiB/s (2260kB/s)(21.6MiB/10006msec) 00:43:24.351 slat (usec): min=5, max=128, avg=49.19, stdev=24.30 00:43:24.351 clat (usec): min=15384, max=43697, avg=28531.25, stdev=1340.30 00:43:24.351 lat (usec): min=15433, max=43713, avg=28580.43, stdev=1332.84 00:43:24.351 clat percentiles (usec): 00:43:24.351 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:43:24.351 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:43:24.351 | 70.00th=[28443], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:43:24.351 | 99.00th=[31327], 99.50th=[33817], 99.90th=[35914], 99.95th=[35914], 00:43:24.351 | 99.99th=[43779] 00:43:24.351 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2195.16, stdev=87.16, samples=19 00:43:24.351 iops : min= 512, max= 576, avg=548.63, stdev=21.65, samples=19 00:43:24.351 lat (msec) : 20=0.29%, 50=99.71% 00:43:24.351 cpu : usr=98.85%, sys=0.77%, ctx=13, majf=0, minf=31 00:43:24.351 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 issued rwts: total=5520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.351 filename2: (groupid=0, jobs=1): err= 0: pid=434886: Sat Dec 14 03:24:37 2024 00:43:24.351 read: IOPS=551, BW=2206KiB/s (2259kB/s)(21.6MiB/10008msec) 00:43:24.351 slat (usec): min=4, max=118, avg=50.97, stdev=23.25 00:43:24.351 clat (usec): min=15288, max=38368, avg=28545.56, stdev=1360.40 00:43:24.351 lat (usec): min=15368, max=38381, avg=28596.53, stdev=1351.57 00:43:24.351 clat percentiles (usec): 00:43:24.351 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:43:24.351 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:43:24.351 | 70.00th=[28443], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:43:24.351 | 99.00th=[31327], 99.50th=[33817], 99.90th=[38536], 99.95th=[38536], 00:43:24.351 | 99.99th=[38536] 00:43:24.351 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2195.11, stdev=86.86, samples=19 00:43:24.351 iops : min= 
512, max= 576, avg=548.58, stdev=21.59, samples=19 00:43:24.351 lat (msec) : 20=0.29%, 50=99.71% 00:43:24.351 cpu : usr=98.86%, sys=0.76%, ctx=16, majf=0, minf=31 00:43:24.351 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 issued rwts: total=5520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.351 filename2: (groupid=0, jobs=1): err= 0: pid=434887: Sat Dec 14 03:24:37 2024 00:43:24.351 read: IOPS=553, BW=2213KiB/s (2266kB/s)(21.6MiB/10005msec) 00:43:24.351 slat (usec): min=6, max=121, avg=27.60, stdev=21.22 00:43:24.351 clat (usec): min=15361, max=46928, avg=28641.62, stdev=2021.05 00:43:24.351 lat (usec): min=15375, max=46942, avg=28669.22, stdev=2017.77 00:43:24.351 clat percentiles (usec): 00:43:24.351 | 1.00th=[20841], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:43:24.351 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:43:24.351 | 70.00th=[28443], 80.00th=[28705], 90.00th=[30278], 95.00th=[30540], 00:43:24.351 | 99.00th=[35390], 99.50th=[38536], 99.90th=[46924], 99.95th=[46924], 00:43:24.351 | 99.99th=[46924] 00:43:24.351 bw ( KiB/s): min= 1923, max= 2304, per=4.17%, avg=2202.05, stdev=90.59, samples=19 00:43:24.351 iops : min= 480, max= 576, avg=550.32, stdev=22.66, samples=19 00:43:24.351 lat (msec) : 20=0.58%, 50=99.42% 00:43:24.351 cpu : usr=98.73%, sys=0.89%, ctx=13, majf=0, minf=35 00:43:24.351 IO depths : 1=5.7%, 2=11.5%, 4=23.3%, 8=52.5%, 16=7.1%, 32=0.0%, >=64=0.0% 00:43:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 issued rwts: total=5536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.351 filename2: (groupid=0, jobs=1): err= 0: pid=434888: Sat Dec 14 03:24:37 2024 00:43:24.351 read: IOPS=554, BW=2220KiB/s (2273kB/s)(21.7MiB/10004msec) 00:43:24.351 slat (usec): min=7, max=125, avg=50.21, stdev=21.73 00:43:24.351 clat (usec): min=10966, max=33889, avg=28395.10, stdev=1985.47 00:43:24.351 lat (usec): min=10982, max=33902, avg=28445.31, stdev=1982.89 00:43:24.351 clat percentiles (usec): 00:43:24.351 | 1.00th=[14353], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:43:24.351 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:43:24.351 | 70.00th=[28443], 80.00th=[29492], 90.00th=[30278], 95.00th=[30278], 00:43:24.351 | 99.00th=[31065], 99.50th=[31851], 99.90th=[33817], 99.95th=[33817], 00:43:24.351 | 99.99th=[33817] 00:43:24.351 bw ( KiB/s): min= 2048, max= 2432, per=4.19%, avg=2215.89, stdev=95.78, samples=19 00:43:24.351 iops : min= 512, max= 608, avg=553.89, stdev=23.93, samples=19 00:43:24.351 lat (msec) : 20=1.15%, 50=98.85% 00:43:24.351 cpu : usr=98.81%, sys=0.82%, ctx=12, majf=0, minf=25 00:43:24.351 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 issued rwts: total=5552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.351 filename2: (groupid=0, jobs=1): err= 0: 
pid=434889: Sat Dec 14 03:24:37 2024 00:43:24.351 read: IOPS=550, BW=2201KiB/s (2254kB/s)(21.5MiB/10003msec) 00:43:24.351 slat (nsec): min=6178, max=90585, avg=31997.52, stdev=17156.18 00:43:24.351 clat (usec): min=15425, max=55328, avg=28797.18, stdev=1976.17 00:43:24.351 lat (usec): min=15460, max=55345, avg=28829.18, stdev=1971.73 00:43:24.351 clat percentiles (usec): 00:43:24.351 | 1.00th=[25297], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:43:24.351 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:43:24.351 | 70.00th=[28705], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:43:24.351 | 99.00th=[33817], 99.50th=[34866], 99.90th=[55313], 99.95th=[55313], 00:43:24.351 | 99.99th=[55313] 00:43:24.351 bw ( KiB/s): min= 1920, max= 2304, per=4.15%, avg=2194.05, stdev=104.94, samples=19 00:43:24.351 iops : min= 480, max= 576, avg=548.32, stdev=26.23, samples=19 00:43:24.351 lat (msec) : 20=0.29%, 50=99.42%, 100=0.29% 00:43:24.351 cpu : usr=97.79%, sys=1.56%, ctx=228, majf=0, minf=38 00:43:24.351 IO depths : 1=5.3%, 2=11.5%, 4=24.9%, 8=51.2%, 16=7.2%, 32=0.0%, >=64=0.0% 00:43:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 issued rwts: total=5504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.351 filename2: (groupid=0, jobs=1): err= 0: pid=434890: Sat Dec 14 03:24:37 2024 00:43:24.351 read: IOPS=552, BW=2210KiB/s (2263kB/s)(21.6MiB/10002msec) 00:43:24.351 slat (usec): min=6, max=100, avg=44.92, stdev=17.46 00:43:24.351 clat (usec): min=14338, max=55365, avg=28580.75, stdev=2128.70 00:43:24.351 lat (usec): min=14347, max=55383, avg=28625.67, stdev=2125.33 00:43:24.351 clat percentiles (usec): 00:43:24.351 | 1.00th=[21103], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:43:24.351 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:43:24.351 | 70.00th=[28443], 80.00th=[29230], 90.00th=[30278], 95.00th=[30540], 00:43:24.351 | 99.00th=[31065], 99.50th=[33817], 99.90th=[55313], 99.95th=[55313], 00:43:24.351 | 99.99th=[55313] 00:43:24.351 bw ( KiB/s): min= 1920, max= 2347, per=4.17%, avg=2204.16, stdev=111.41, samples=19 00:43:24.351 iops : min= 480, max= 586, avg=550.84, stdev=27.71, samples=19 00:43:24.351 lat (msec) : 20=0.85%, 50=98.86%, 100=0.29% 00:43:24.351 cpu : usr=98.37%, sys=1.18%, ctx=71, majf=0, minf=35 00:43:24.351 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.351 issued rwts: total=5526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.351 filename2: (groupid=0, jobs=1): err= 0: pid=434891: Sat Dec 14 03:24:37 2024 00:43:24.351 read: IOPS=549, BW=2198KiB/s (2251kB/s)(21.5MiB/10002msec) 00:43:24.351 slat (usec): min=5, max=104, avg=48.77, stdev=17.06 00:43:24.351 clat (usec): min=15375, max=79163, avg=28689.63, stdev=2599.55 00:43:24.351 lat (usec): min=15410, max=79208, avg=28738.40, stdev=2598.66 00:43:24.352 clat percentiles (usec): 00:43:24.352 | 1.00th=[24249], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:43:24.352 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:43:24.352 | 70.00th=[28443], 80.00th=[29230], 
90.00th=[30016], 95.00th=[30278], 00:43:24.352 | 99.00th=[36963], 99.50th=[40109], 99.90th=[63177], 99.95th=[63177], 00:43:24.352 | 99.99th=[79168] 00:43:24.352 bw ( KiB/s): min= 1891, max= 2304, per=4.14%, avg=2191.68, stdev=112.60, samples=19 00:43:24.352 iops : min= 472, max= 576, avg=547.68, stdev=28.11, samples=19 00:43:24.352 lat (msec) : 20=0.35%, 50=99.36%, 100=0.29% 00:43:24.352 cpu : usr=98.50%, sys=0.95%, ctx=52, majf=0, minf=30 00:43:24.352 IO depths : 1=5.6%, 2=11.7%, 4=24.4%, 8=51.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:43:24.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.352 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.352 issued rwts: total=5496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.352 00:43:24.352 Run status group 0 (all jobs): 00:43:24.352 READ: bw=51.6MiB/s (54.1MB/s), 2196KiB/s-2242KiB/s (2249kB/s-2295kB/s), io=519MiB (544MB), run=10002-10056msec 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub 
in "$@" 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.352 bdev_null0 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.352 [2024-12-14 03:24:38.244553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.352 bdev_null1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:24.352 { 00:43:24.352 "params": { 00:43:24.352 "name": "Nvme$subsystem", 00:43:24.352 "trtype": "$TEST_TRANSPORT", 00:43:24.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:24.352 "adrfam": "ipv4", 00:43:24.352 "trsvcid": "$NVMF_PORT", 00:43:24.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:24.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:24.352 "hdgst": ${hdgst:-false}, 00:43:24.352 "ddgst": ${ddgst:-false} 00:43:24.352 }, 00:43:24.352 "method": "bdev_nvme_attach_controller" 00:43:24.352 } 00:43:24.352 EOF 00:43:24.352 )") 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:24.352 03:24:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:24.353 { 00:43:24.353 "params": { 00:43:24.353 "name": "Nvme$subsystem", 00:43:24.353 "trtype": "$TEST_TRANSPORT", 00:43:24.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:24.353 "adrfam": "ipv4", 00:43:24.353 "trsvcid": "$NVMF_PORT", 00:43:24.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:24.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:24.353 "hdgst": ${hdgst:-false}, 00:43:24.353 "ddgst": ${ddgst:-false} 00:43:24.353 }, 00:43:24.353 "method": "bdev_nvme_attach_controller" 00:43:24.353 } 00:43:24.353 EOF 00:43:24.353 )") 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # cat 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:24.353 "params": { 00:43:24.353 "name": "Nvme0", 00:43:24.353 "trtype": "tcp", 00:43:24.353 "traddr": "10.0.0.2", 00:43:24.353 "adrfam": "ipv4", 00:43:24.353 "trsvcid": "4420", 00:43:24.353 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:24.353 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:24.353 "hdgst": false, 00:43:24.353 "ddgst": false 00:43:24.353 }, 00:43:24.353 "method": "bdev_nvme_attach_controller" 00:43:24.353 },{ 00:43:24.353 "params": { 00:43:24.353 "name": "Nvme1", 00:43:24.353 "trtype": "tcp", 00:43:24.353 "traddr": "10.0.0.2", 00:43:24.353 "adrfam": "ipv4", 00:43:24.353 "trsvcid": "4420", 00:43:24.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:24.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:24.353 "hdgst": false, 00:43:24.353 "ddgst": false 00:43:24.353 }, 00:43:24.353 "method": "bdev_nvme_attach_controller" 00:43:24.353 }' 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:24.353 03:24:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:24.353 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:24.353 ... 00:43:24.353 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:24.353 ... 
00:43:24.353 fio-3.35 00:43:24.353 Starting 4 threads 00:43:29.614 00:43:29.614 filename0: (groupid=0, jobs=1): err= 0: pid=435146: Sat Dec 14 03:24:44 2024 00:43:29.614 read: IOPS=2583, BW=20.2MiB/s (21.2MB/s)(101MiB/5002msec) 00:43:29.614 slat (nsec): min=6173, max=38754, avg=9375.24, stdev=3534.61 00:43:29.614 clat (usec): min=673, max=5525, avg=3068.07, stdev=438.42 00:43:29.614 lat (usec): min=684, max=5547, avg=3077.45, stdev=438.27 00:43:29.614 clat percentiles (usec): 00:43:29.614 | 1.00th=[ 2114], 5.00th=[ 2474], 10.00th=[ 2671], 20.00th=[ 2868], 00:43:29.614 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:43:29.614 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3589], 95.00th=[ 3785], 00:43:29.614 | 99.00th=[ 4752], 99.50th=[ 5080], 99.90th=[ 5342], 99.95th=[ 5473], 00:43:29.614 | 99.99th=[ 5538] 00:43:29.614 bw ( KiB/s): min=19728, max=21168, per=24.59%, avg=20680.89, stdev=476.19, samples=9 00:43:29.614 iops : min= 2466, max= 2646, avg=2585.11, stdev=59.52, samples=9 00:43:29.614 lat (usec) : 750=0.02%, 1000=0.02% 00:43:29.614 lat (msec) : 2=0.63%, 4=95.47%, 10=3.88% 00:43:29.614 cpu : usr=95.96%, sys=3.76%, ctx=8, majf=0, minf=9 00:43:29.614 IO depths : 1=0.3%, 2=4.4%, 4=67.2%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:29.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.614 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.614 issued rwts: total=12923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:29.614 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:29.614 filename0: (groupid=0, jobs=1): err= 0: pid=435147: Sat Dec 14 03:24:44 2024 00:43:29.614 read: IOPS=2863, BW=22.4MiB/s (23.5MB/s)(112MiB/5003msec) 00:43:29.614 slat (nsec): min=6170, max=32007, avg=9214.17, stdev=3359.91 00:43:29.614 clat (usec): min=560, max=5737, avg=2761.89, stdev=425.05 00:43:29.614 lat (usec): min=572, max=5748, avg=2771.10, stdev=424.97 00:43:29.614 clat percentiles (usec): 00:43:29.614 | 1.00th=[ 1582], 5.00th=[ 2073], 10.00th=[ 2245], 20.00th=[ 2442], 00:43:29.614 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2835], 60.00th=[ 2966], 00:43:29.614 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3130], 95.00th=[ 3359], 00:43:29.614 | 99.00th=[ 3982], 99.50th=[ 4080], 99.90th=[ 4686], 99.95th=[ 4948], 00:43:29.614 | 99.99th=[ 5276] 00:43:29.614 bw ( KiB/s): min=21424, max=24512, per=27.25%, avg=22914.00, stdev=905.50, samples=9 00:43:29.614 iops : min= 2678, max= 3064, avg=2864.22, stdev=113.19, samples=9 00:43:29.614 lat (usec) : 750=0.01%, 1000=0.06% 00:43:29.614 lat (msec) : 2=3.39%, 4=95.64%, 10=0.90% 00:43:29.614 cpu : usr=95.70%, sys=4.00%, ctx=7, majf=0, minf=0 00:43:29.614 IO depths : 1=1.0%, 2=11.8%, 4=61.1%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:29.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.614 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.614 issued rwts: total=14327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:29.614 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:29.614 filename1: (groupid=0, jobs=1): err= 0: pid=435148: Sat Dec 14 03:24:44 2024 00:43:29.614 read: IOPS=2600, BW=20.3MiB/s (21.3MB/s)(102MiB/5042msec) 00:43:29.614 slat (nsec): min=6194, max=37003, avg=9368.72, stdev=3585.29 00:43:29.614 clat (usec): min=437, max=43150, avg=3037.89, stdev=855.41 00:43:29.614 lat (usec): min=449, max=43161, avg=3047.26, stdev=855.36 00:43:29.614 clat percentiles (usec): 00:43:29.615 | 1.00th=[ 
2073], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2802], 00:43:29.615 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:43:29.615 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3687], 00:43:29.615 | 99.00th=[ 4228], 99.50th=[ 4359], 99.90th=[ 4948], 99.95th=[ 5211], 00:43:29.615 | 99.99th=[43254] 00:43:29.615 bw ( KiB/s): min=20272, max=21552, per=24.94%, avg=20969.60, stdev=413.34, samples=10 00:43:29.615 iops : min= 2534, max= 2694, avg=2621.20, stdev=51.67, samples=10 00:43:29.615 lat (usec) : 500=0.01%, 1000=0.02% 00:43:29.615 lat (msec) : 2=0.62%, 4=97.47%, 10=1.85%, 50=0.04% 00:43:29.615 cpu : usr=96.09%, sys=3.63%, ctx=7, majf=0, minf=0 00:43:29.615 IO depths : 1=0.8%, 2=4.5%, 4=68.4%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:29.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.615 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.615 issued rwts: total=13111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:29.615 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:29.615 filename1: (groupid=0, jobs=1): err= 0: pid=435149: Sat Dec 14 03:24:44 2024 00:43:29.615 read: IOPS=2526, BW=19.7MiB/s (20.7MB/s)(98.7MiB/5001msec) 00:43:29.615 slat (nsec): min=6183, max=32399, avg=9076.03, stdev=3524.98 00:43:29.615 clat (usec): min=624, max=5616, avg=3138.34, stdev=409.42 00:43:29.615 lat (usec): min=631, max=5630, avg=3147.42, stdev=409.13 00:43:29.615 clat percentiles (usec): 00:43:29.615 | 1.00th=[ 2278], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 2933], 00:43:29.615 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:43:29.615 | 70.00th=[ 3163], 80.00th=[ 3294], 90.00th=[ 3654], 95.00th=[ 3916], 00:43:29.615 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5473], 99.95th=[ 5538], 00:43:29.615 | 99.99th=[ 5604] 00:43:29.615 bw ( KiB/s): min=19376, max=20985, per=24.05%, avg=20226.78, stdev=596.68, samples=9 00:43:29.615 iops : min= 2422, max= 2623, avg=2528.33, stdev=74.57, samples=9 00:43:29.615 lat (usec) : 750=0.01%, 1000=0.02% 00:43:29.615 lat (msec) : 2=0.29%, 4=95.80%, 10=3.88% 00:43:29.615 cpu : usr=96.04%, sys=3.66%, ctx=10, majf=0, minf=9 00:43:29.615 IO depths : 1=0.4%, 2=2.5%, 4=71.2%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:29.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.615 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.615 issued rwts: total=12636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:29.615 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:29.615 00:43:29.615 Run status group 0 (all jobs): 00:43:29.615 READ: bw=82.1MiB/s (86.1MB/s), 19.7MiB/s-22.4MiB/s (20.7MB/s-23.5MB/s), io=414MiB (434MB), run=5001-5042msec 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.615 00:43:29.615 real 0m24.444s 00:43:29.615 user 4m52.598s 00:43:29.615 sys 0m4.952s 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:29.615 03:24:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:29.615 ************************************ 00:43:29.615 END TEST fio_dif_rand_params 00:43:29.615 ************************************ 00:43:29.615 03:24:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:43:29.615 03:24:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:29.615 03:24:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:29.615 03:24:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:29.874 ************************************ 00:43:29.874 START TEST fio_dif_digest 00:43:29.874 ************************************ 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:29.874 bdev_null0 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:29.874 [2024-12-14 03:24:44.804306] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:29.874 { 00:43:29.874 "params": { 00:43:29.874 "name": 
"Nvme$subsystem", 00:43:29.874 "trtype": "$TEST_TRANSPORT", 00:43:29.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:29.874 "adrfam": "ipv4", 00:43:29.874 "trsvcid": "$NVMF_PORT", 00:43:29.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:29.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:29.874 "hdgst": ${hdgst:-false}, 00:43:29.874 "ddgst": ${ddgst:-false} 00:43:29.874 }, 00:43:29.874 "method": "bdev_nvme_attach_controller" 00:43:29.874 } 00:43:29.874 EOF 00:43:29.874 )") 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:29.874 "params": { 00:43:29.874 "name": "Nvme0", 00:43:29.874 "trtype": "tcp", 00:43:29.874 "traddr": "10.0.0.2", 00:43:29.874 "adrfam": "ipv4", 00:43:29.874 "trsvcid": "4420", 00:43:29.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:29.874 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:29.874 "hdgst": true, 00:43:29.874 "ddgst": true 00:43:29.874 }, 00:43:29.874 "method": "bdev_nvme_attach_controller" 00:43:29.874 }' 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:29.874 03:24:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:30.133 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:30.133 ... 
00:43:30.133 fio-3.35 00:43:30.133 Starting 3 threads 00:43:42.334 00:43:42.334 filename0: (groupid=0, jobs=1): err= 0: pid=435358: Sat Dec 14 03:24:55 2024 00:43:42.334 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(336MiB/10044msec) 00:43:42.334 slat (nsec): min=6462, max=28426, avg=11576.93, stdev=1804.59 00:43:42.334 clat (usec): min=8232, max=50857, avg=11176.60, stdev=1263.37 00:43:42.334 lat (usec): min=8258, max=50869, avg=11188.18, stdev=1263.36 00:43:42.334 clat percentiles (usec): 00:43:42.334 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:43:42.334 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:43:42.334 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:43:42.334 | 99.00th=[13042], 99.50th=[13173], 99.90th=[14091], 99.95th=[46400], 00:43:42.334 | 99.99th=[51119] 00:43:42.334 bw ( KiB/s): min=33280, max=35072, per=33.00%, avg=34393.60, stdev=409.22, samples=20 00:43:42.334 iops : min= 260, max= 274, avg=268.70, stdev= 3.20, samples=20 00:43:42.334 lat (msec) : 10=6.10%, 20=93.83%, 50=0.04%, 100=0.04% 00:43:42.334 cpu : usr=94.67%, sys=5.07%, ctx=21, majf=0, minf=0 00:43:42.334 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:42.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.334 issued rwts: total=2689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:42.334 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:42.334 filename0: (groupid=0, jobs=1): err= 0: pid=435359: Sat Dec 14 03:24:55 2024 00:43:42.334 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(325MiB/10043msec) 00:43:42.334 slat (nsec): min=6522, max=36042, avg=11723.19, stdev=1694.41 00:43:42.334 clat (usec): min=8355, max=47522, avg=11550.45, stdev=1254.80 00:43:42.334 lat (usec): min=8367, max=47534, avg=11562.17, stdev=1254.79 00:43:42.334 clat percentiles (usec): 00:43:42.334 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:43:42.334 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:43:42.334 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:43:42.334 | 99.00th=[13435], 99.50th=[13829], 99.90th=[14746], 99.95th=[47449], 00:43:42.334 | 99.99th=[47449] 00:43:42.334 bw ( KiB/s): min=32768, max=33792, per=31.93%, avg=33280.00, stdev=310.77, samples=20 00:43:42.334 iops : min= 256, max= 264, avg=260.00, stdev= 2.43, samples=20 00:43:42.334 lat (msec) : 10=2.00%, 20=97.92%, 50=0.08% 00:43:42.334 cpu : usr=94.83%, sys=4.92%, ctx=20, majf=0, minf=10 00:43:42.334 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:42.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.334 issued rwts: total=2602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:42.334 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:42.334 filename0: (groupid=0, jobs=1): err= 0: pid=435360: Sat Dec 14 03:24:55 2024 00:43:42.334 read: IOPS=287, BW=35.9MiB/s (37.7MB/s)(361MiB/10046msec) 00:43:42.334 slat (nsec): min=6449, max=28500, avg=11743.98, stdev=1701.84 00:43:42.334 clat (usec): min=7989, max=50382, avg=10403.84, stdev=1280.13 00:43:42.334 lat (usec): min=8000, max=50392, avg=10415.58, stdev=1280.03 00:43:42.334 clat percentiles (usec): 00:43:42.334 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 
00:43:42.334 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:43:42.334 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:43:42.334 | 99.00th=[12387], 99.50th=[12649], 99.90th=[14091], 99.95th=[47973], 00:43:42.334 | 99.99th=[50594] 00:43:42.334 bw ( KiB/s): min=35840, max=37888, per=35.44%, avg=36940.80, stdev=690.43, samples=20 00:43:42.334 iops : min= 280, max= 296, avg=288.60, stdev= 5.39, samples=20 00:43:42.335 lat (msec) : 10=30.08%, 20=69.85%, 50=0.03%, 100=0.03% 00:43:42.335 cpu : usr=94.41%, sys=5.34%, ctx=19, majf=0, minf=2 00:43:42.335 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:42.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.335 issued rwts: total=2889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:42.335 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:42.335 00:43:42.335 Run status group 0 (all jobs): 00:43:42.335 READ: bw=102MiB/s (107MB/s), 32.4MiB/s-35.9MiB/s (34.0MB/s-37.7MB/s), io=1023MiB (1072MB), run=10043-10046msec 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:42.335 00:43:42.335 real 0m11.175s 00:43:42.335 user 0m35.334s 00:43:42.335 sys 0m1.823s 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:42.335 03:24:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:42.335 ************************************ 00:43:42.335 END TEST fio_dif_digest 00:43:42.335 ************************************ 00:43:42.335 03:24:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:42.335 03:24:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:42.335 03:24:55 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:42.335 03:24:55 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:43:42.335 03:24:55 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:42.335 03:24:55 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:43:42.335 03:24:55 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:42.335 03:24:55 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:42.335 rmmod nvme_tcp 00:43:42.335 rmmod nvme_fabrics 00:43:42.335 rmmod nvme_keyring 00:43:42.335 03:24:56 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:42.335 03:24:56 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:43:42.335 03:24:56 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:43:42.335 03:24:56 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 433438 ']' 00:43:42.335 03:24:56 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 433438 00:43:42.335 03:24:56 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 433438 ']' 00:43:42.335 03:24:56 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 433438 00:43:42.335 03:24:56 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:43:42.335 03:24:56 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:42.335 03:24:56 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 433438 00:43:42.335 03:24:56 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:42.335 03:24:56 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:42.335 03:24:56 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 433438' 00:43:42.335 killing process with pid 433438 00:43:42.335 03:24:56 nvmf_dif -- common/autotest_common.sh@973 -- # kill 433438 00:43:42.335 03:24:56 nvmf_dif -- common/autotest_common.sh@978 -- # wait 433438 00:43:42.335 03:24:56 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:42.335 03:24:56 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:44.241 Waiting for block devices as requested 00:43:44.241 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:44.241 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:44.241 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:44.241 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:44.241 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:44.241 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:44.500 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:44.500 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:44.500 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:44.759 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:44.759 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:44.759 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:45.017 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:45.017 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:45.017 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:45.017 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:45.275 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:45.275 03:25:00 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:45.275 03:25:00 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:45.275 03:25:00 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:43:45.275 03:25:00 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:43:45.275 03:25:00 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:45.275 03:25:00 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:43:45.275 03:25:00 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:45.275 03:25:00 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:45.275 03:25:00 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:45.275 03:25:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:45.275 03:25:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:47.810 03:25:02 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:47.810 
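The teardown traced here is nvmftestfini followed by setup.sh reset: the nvmf_tgt app is killed, the kernel initiator modules are unloaded (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring are modprobe -r output), the SPDK-tagged iptables rules are dropped, and the test namespace and addresses are removed. A condensed sketch of those steps, using the names from this run ($nvmfpid is the harness variable holding the target's PID):

  kill "$nvmfpid"                                       # stop the nvmf_tgt reactor for this suite
  modprobe -r nvme-tcp && modprobe -r nvme-fabrics      # unload the kernel initiator stack
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep everything except the SPDK_NVMF-tagged rules
  ip netns delete cvl_0_0_ns_spdk                       # remove the target-side namespace
  ip -4 addr flush cvl_0_1                              # clear the initiator-side test address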
00:43:47.810 real 1m14.064s 00:43:47.810 user 7m10.307s 00:43:47.810 sys 0m20.454s 00:43:47.810 03:25:02 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:47.810 03:25:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:47.810 ************************************ 00:43:47.810 END TEST nvmf_dif 00:43:47.810 ************************************ 00:43:47.810 03:25:02 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:47.810 03:25:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:47.810 03:25:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:47.810 03:25:02 -- common/autotest_common.sh@10 -- # set +x 00:43:47.810 ************************************ 00:43:47.810 START TEST nvmf_abort_qd_sizes 00:43:47.810 ************************************ 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:47.810 * Looking for test storage... 00:43:47.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:47.810 --rc genhtml_branch_coverage=1 00:43:47.810 --rc genhtml_function_coverage=1 00:43:47.810 --rc genhtml_legend=1 00:43:47.810 --rc geninfo_all_blocks=1 00:43:47.810 --rc geninfo_unexecuted_blocks=1 00:43:47.810 00:43:47.810 ' 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:47.810 --rc genhtml_branch_coverage=1 00:43:47.810 --rc genhtml_function_coverage=1 00:43:47.810 --rc genhtml_legend=1 00:43:47.810 --rc geninfo_all_blocks=1 00:43:47.810 --rc geninfo_unexecuted_blocks=1 00:43:47.810 00:43:47.810 ' 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:47.810 --rc genhtml_branch_coverage=1 00:43:47.810 --rc genhtml_function_coverage=1 00:43:47.810 --rc genhtml_legend=1 00:43:47.810 --rc geninfo_all_blocks=1 00:43:47.810 --rc geninfo_unexecuted_blocks=1 00:43:47.810 00:43:47.810 ' 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:47.810 --rc genhtml_branch_coverage=1 00:43:47.810 --rc genhtml_function_coverage=1 00:43:47.810 --rc genhtml_legend=1 00:43:47.810 --rc geninfo_all_blocks=1 00:43:47.810 --rc geninfo_unexecuted_blocks=1 00:43:47.810 00:43:47.810 ' 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:47.810 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:47.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:43:47.811 03:25:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:53.084 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:53.085 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:53.085 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:53.085 Found net devices under 0000:af:00.0: cvl_0_0 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:53.085 Found net devices under 0000:af:00.1: cvl_0_1 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:53.085 03:25:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:53.085 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:53.344 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:53.344 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:53.344 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:53.344 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:53.344 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:53.344 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:53.344 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:53.344 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:53.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:53.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:43:53.344 00:43:53.344 --- 10.0.0.2 ping statistics --- 00:43:53.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:53.344 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:43:53.344 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:53.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:53.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:43:53.344 00:43:53.344 --- 10.0.0.1 ping statistics --- 00:43:53.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:53.345 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:43:53.345 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:53.345 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:53.345 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:53.345 03:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:56.634 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:56.634 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:56.894 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=439417 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 439417 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 439417 ']' 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
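At this point the target-side bring-up for abort_qd_sizes is complete: cvl_0_0 sits in the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, port 4420 is opened in iptables, and both directions answer ping. nvmfappstart then launches the target inside that namespace and waitforlisten blocks until its RPC socket responds. Roughly, as a sketch with the core mask from the trace and repo-relative paths (the polling loop stands in for the harness's waitforlisten):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  # wait until the app answers on its default RPC socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done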
00:43:57.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:57.152 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:57.152 [2024-12-14 03:25:12.251723] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:43:57.152 [2024-12-14 03:25:12.251766] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:57.411 [2024-12-14 03:25:12.329973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:57.411 [2024-12-14 03:25:12.353575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:57.411 [2024-12-14 03:25:12.353612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:57.411 [2024-12-14 03:25:12.353619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:57.411 [2024-12-14 03:25:12.353625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:57.411 [2024-12-14 03:25:12.353630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:57.411 [2024-12-14 03:25:12.355036] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:57.411 [2024-12-14 03:25:12.355058] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:57.411 [2024-12-14 03:25:12.355143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:57.411 [2024-12-14 03:25:12.355144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:57.411 
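nvme_in_userspace, traced just above, walks the cached PCI functions of class 0x010802 (NVMe) and keeps the ones no longer claimed by the kernel nvme driver, since setup.sh has already rebound them to vfio-pci. One way to approximate that check by hand (a sketch, not the script itself, assuming lspci is installed):

  # list NVMe-class functions that are free for userspace (vfio-pci) use
  for bdf in $(lspci -Dn -d ::0108 | awk '{print $1}'); do
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue   # still bound to the kernel driver
      echo "$bdf"                                           # e.g. 0000:5e:00.0 in this run
  done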
03:25:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:57.411 03:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:57.411 ************************************ 00:43:57.411 START TEST spdk_target_abort 00:43:57.411 ************************************ 00:43:57.411 03:25:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:57.411 03:25:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:57.411 03:25:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:43:57.411 03:25:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.411 03:25:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:00.692 spdk_targetn1 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:00.692 [2024-12-14 03:25:15.359604] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:00.692 [2024-12-14 03:25:15.403874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:00.692 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:00.693 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:00.693 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:00.693 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:00.693 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:00.693 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:00.693 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:00.693 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:00.693 03:25:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:03.972 Initializing NVMe Controllers 00:44:03.972 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:03.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:03.972 Initialization complete. Launching workers. 00:44:03.972 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15048, failed: 0 00:44:03.972 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1318, failed to submit 13730 00:44:03.972 success 695, unsuccessful 623, failed 0 00:44:03.972 03:25:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:03.972 03:25:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:07.249 Initializing NVMe Controllers 00:44:07.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:07.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:07.249 Initialization complete. Launching workers. 00:44:07.249 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8538, failed: 0 00:44:07.249 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1254, failed to submit 7284 00:44:07.249 success 324, unsuccessful 930, failed 0 00:44:07.249 03:25:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:07.249 03:25:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:10.523 Initializing NVMe Controllers 00:44:10.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:10.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:10.523 Initialization complete. Launching workers. 
00:44:10.523 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38898, failed: 0 00:44:10.523 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2823, failed to submit 36075 00:44:10.523 success 590, unsuccessful 2233, failed 0 00:44:10.523 03:25:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:10.523 03:25:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:10.523 03:25:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:10.523 03:25:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:10.523 03:25:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:10.523 03:25:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:10.523 03:25:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:11.454 03:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.454 03:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 439417 00:44:11.454 03:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 439417 ']' 00:44:11.454 03:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 439417 00:44:11.454 03:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:44:11.454 03:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:11.454 03:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 439417 00:44:11.454 03:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:11.454 03:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:11.454 03:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 439417' 00:44:11.454 killing process with pid 439417 00:44:11.454 03:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 439417 00:44:11.454 03:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 439417 00:44:11.454 00:44:11.454 real 0m14.026s 00:44:11.454 user 0m53.684s 00:44:11.454 sys 0m2.331s 00:44:11.454 03:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:11.454 03:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:11.454 ************************************ 00:44:11.454 END TEST spdk_target_abort 00:44:11.454 ************************************ 00:44:11.714 03:25:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:11.714 03:25:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:11.714 03:25:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:11.714 03:25:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:11.714 ************************************ 00:44:11.714 START TEST kernel_target_abort 00:44:11.714 
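The spdk_target_abort test that just finished and the kernel_target_abort test starting here share the rabort helper: it assembles an SPDK transport-ID string field by field (trtype, adrfam, traddr, trsvcid, subnqn) and runs the abort example once per queue depth from qds=(4 24 64). A condensed sketch of that loop, with TRADDR and a relative binary path written out as plain variables purely for illustration:

  TRADDR=10.0.0.2                      # 10.0.0.1 for the kernel target exercised below
  SUBNQN=nqn.2016-06.io.spdk:testnqn
  TGT="trtype:tcp adrfam:IPv4 traddr:$TRADDR trsvcid:4420 subnqn:$SUBNQN"

  for qd in 4 24 64; do
      # -q queue depth, -w rw -M 50 = 50/50 mixed I/O, -o 4096 = 4 KiB blocks
      ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$TGT"
  done

Each run reports how many aborts it managed to submit and how many of those caught an in-flight command, which is what the success/unsuccessful/failed lines above summarize.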
************************************ 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:11.714 03:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:14.249 Waiting for block devices as requested 00:44:14.249 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:14.508 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:14.508 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:14.508 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:14.768 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:14.768 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:14.768 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:15.026 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:15.026 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:15.026 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:15.284 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:15.284 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:15.284 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:15.284 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:15.542 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:15.542 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:15.542 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:15.800 No valid GPT data, bailing 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:15.800 03:25:30 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:44:15.800 00:44:15.800 Discovery Log Number of Records 2, Generation counter 2 00:44:15.800 =====Discovery Log Entry 0====== 00:44:15.800 trtype: tcp 00:44:15.800 adrfam: ipv4 00:44:15.800 subtype: current discovery subsystem 00:44:15.800 treq: not specified, sq flow control disable supported 00:44:15.800 portid: 1 00:44:15.800 trsvcid: 4420 00:44:15.800 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:15.800 traddr: 10.0.0.1 00:44:15.800 eflags: none 00:44:15.800 sectype: none 00:44:15.800 =====Discovery Log Entry 1====== 00:44:15.800 trtype: tcp 00:44:15.800 adrfam: ipv4 00:44:15.800 subtype: nvme subsystem 00:44:15.800 treq: not specified, sq flow control disable supported 00:44:15.800 portid: 1 00:44:15.800 trsvcid: 4420 00:44:15.800 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:15.800 traddr: 10.0.0.1 00:44:15.800 eflags: none 00:44:15.800 sectype: none 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:15.800 03:25:30 
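configure_kernel_target, traced above, builds the Linux nvmet target entirely through configfs: a subsystem with one namespace backed by /dev/nvme0n1 and a TCP port on 10.0.0.1:4420, joined by a symlink. A stripped-down sketch of that sequence and of the clean_kernel_target teardown that appears later in this log (the serial-number and allow-any-host writes are omitted here for brevity):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=$nvmet/ports/1

  modprobe nvmet
  modprobe nvmet_tcp
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"              # publish the subsystem on the port

  nvme discover -t tcp -a 10.0.0.1 -s 4420         # should show the two log entries printed above

  # teardown, mirroring clean_kernel_target further down
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet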
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:15.800 03:25:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:19.083 Initializing NVMe Controllers 00:44:19.083 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:19.083 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:19.083 Initialization complete. Launching workers. 00:44:19.083 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96432, failed: 0 00:44:19.083 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 96432, failed to submit 0 00:44:19.083 success 0, unsuccessful 96432, failed 0 00:44:19.083 03:25:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:19.083 03:25:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:22.368 Initializing NVMe Controllers 00:44:22.368 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:22.368 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:22.368 Initialization complete. Launching workers. 
00:44:22.368 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 153061, failed: 0 00:44:22.368 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38402, failed to submit 114659 00:44:22.368 success 0, unsuccessful 38402, failed 0 00:44:22.368 03:25:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:22.368 03:25:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:25.653 Initializing NVMe Controllers 00:44:25.653 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:25.653 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:25.653 Initialization complete. Launching workers. 00:44:25.653 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142941, failed: 0 00:44:25.653 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35794, failed to submit 107147 00:44:25.653 success 0, unsuccessful 35794, failed 0 00:44:25.653 03:25:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:44:25.653 03:25:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:44:25.653 03:25:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:44:25.653 03:25:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:25.653 03:25:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:25.653 03:25:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:44:25.653 03:25:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:25.653 03:25:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:44:25.653 03:25:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:44:25.653 03:25:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:28.187 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:28.187 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:28.187 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:28.187 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:28.187 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:28.187 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:28.187 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:28.187 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:28.187 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:28.187 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:28.187 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:28.187 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:28.187 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:28.187 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:28.187 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:44:28.187 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:29.124 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:44:29.124 00:44:29.124 real 0m17.395s 00:44:29.124 user 0m9.141s 00:44:29.124 sys 0m4.985s 00:44:29.124 03:25:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:29.124 03:25:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:29.124 ************************************ 00:44:29.124 END TEST kernel_target_abort 00:44:29.124 ************************************ 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:29.124 rmmod nvme_tcp 00:44:29.124 rmmod nvme_fabrics 00:44:29.124 rmmod nvme_keyring 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 439417 ']' 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 439417 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 439417 ']' 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 439417 00:44:29.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (439417) - No such process 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 439417 is not found' 00:44:29.124 Process with pid 439417 is not found 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:29.124 03:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:31.661 Waiting for block devices as requested 00:44:31.662 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:31.920 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:31.920 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:32.180 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:32.180 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:32.180 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:32.180 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:32.439 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:32.439 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:32.439 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:32.698 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:32.698 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:32.698 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:32.957 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:32.957 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:32.957 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:32.957 
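The ioatdma/vfio-pci and nvme/vfio-pci lines in this block are setup.sh rebinding devices: the userspace (SPDK) phases want vfio-pci, while the kernel-target phase above needed the in-kernel nvme driver back. setup.sh wraps this with a lot of bookkeeping; the bare sysfs mechanism it relies on looks roughly like the sketch below, using the NVMe device exercised in this run as the example:

  bdf=0000:5e:00.0

  # hand the device to vfio-pci
  echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override
  echo "$bdf"   > /sys/bus/pci/devices/$bdf/driver/unbind
  echo "$bdf"   > /sys/bus/pci/drivers_probe

  # and give it back to the kernel nvme driver
  echo ""       > /sys/bus/pci/devices/$bdf/driver_override   # empty write clears the override
  echo "$bdf"   > /sys/bus/pci/devices/$bdf/driver/unbind
  echo "$bdf"   > /sys/bus/pci/drivers_probe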
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:33.216 03:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:33.216 03:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:33.216 03:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:44:33.216 03:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:44:33.216 03:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:33.216 03:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:44:33.216 03:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:33.216 03:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:33.216 03:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:33.216 03:25:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:33.216 03:25:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:35.118 03:25:50 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:35.118 00:44:35.118 real 0m47.814s 00:44:35.118 user 1m7.168s 00:44:35.118 sys 0m15.875s 00:44:35.118 03:25:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:35.118 03:25:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:35.118 ************************************ 00:44:35.118 END TEST nvmf_abort_qd_sizes 00:44:35.118 ************************************ 00:44:35.383 03:25:50 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:35.383 03:25:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:35.383 03:25:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:35.383 03:25:50 -- common/autotest_common.sh@10 -- # set +x 00:44:35.383 ************************************ 00:44:35.383 START TEST keyring_file 00:44:35.383 ************************************ 00:44:35.383 03:25:50 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:35.383 * Looking for test storage... 
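Firewall cleanup in nvmftestfini (the iptr call above) removes only the rules this test installed: every rule was added with an 'SPDK_NVMF:' comment, so a save/filter/restore pass leaves unrelated rules untouched. A minimal sketch of that pattern, plus the namespace and address cleanup that goes with it (the namespace delete is an assumption about what the xtrace-silenced _remove_spdk_ns helper does):

  # strip only the rules tagged with the SPDK_NVMF comment
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # remove the target namespace and flush the leftover initiator address
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
  ip -4 addr flush cvl_0_1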
00:44:35.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:35.383 03:25:50 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:35.383 03:25:50 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:44:35.383 03:25:50 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:35.383 03:25:50 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@345 -- # : 1 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@353 -- # local d=1 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@355 -- # echo 1 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@353 -- # local d=2 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@355 -- # echo 2 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:35.383 03:25:50 keyring_file -- scripts/common.sh@368 -- # return 0 00:44:35.383 03:25:50 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:35.383 03:25:50 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:35.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.383 --rc genhtml_branch_coverage=1 00:44:35.383 --rc genhtml_function_coverage=1 00:44:35.383 --rc genhtml_legend=1 00:44:35.383 --rc geninfo_all_blocks=1 00:44:35.383 --rc geninfo_unexecuted_blocks=1 00:44:35.383 00:44:35.383 ' 00:44:35.383 03:25:50 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:35.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.383 --rc genhtml_branch_coverage=1 00:44:35.383 --rc genhtml_function_coverage=1 00:44:35.383 --rc genhtml_legend=1 00:44:35.383 --rc geninfo_all_blocks=1 
00:44:35.383 --rc geninfo_unexecuted_blocks=1 00:44:35.383 00:44:35.383 ' 00:44:35.383 03:25:50 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:35.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.383 --rc genhtml_branch_coverage=1 00:44:35.383 --rc genhtml_function_coverage=1 00:44:35.383 --rc genhtml_legend=1 00:44:35.383 --rc geninfo_all_blocks=1 00:44:35.383 --rc geninfo_unexecuted_blocks=1 00:44:35.383 00:44:35.383 ' 00:44:35.383 03:25:50 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:35.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.383 --rc genhtml_branch_coverage=1 00:44:35.383 --rc genhtml_function_coverage=1 00:44:35.383 --rc genhtml_legend=1 00:44:35.383 --rc geninfo_all_blocks=1 00:44:35.383 --rc geninfo_unexecuted_blocks=1 00:44:35.383 00:44:35.383 ' 00:44:35.383 03:25:50 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:35.383 03:25:50 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:35.383 03:25:50 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:44:35.383 03:25:50 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:35.383 03:25:50 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:35.383 03:25:50 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:35.383 03:25:50 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:35.383 03:25:50 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:35.383 03:25:50 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:35.383 03:25:50 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:35.383 03:25:50 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:35.383 03:25:50 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:35.383 03:25:50 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:35.682 03:25:50 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:44:35.682 03:25:50 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:35.682 03:25:50 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:35.682 03:25:50 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:35.682 03:25:50 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.682 03:25:50 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.682 03:25:50 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.682 03:25:50 keyring_file -- paths/export.sh@5 -- # export PATH 00:44:35.682 03:25:50 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@51 -- # : 0 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:35.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:35.682 03:25:50 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:35.682 03:25:50 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:35.683 03:25:50 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:35.683 03:25:50 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:35.683 03:25:50 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:44:35.683 03:25:50 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:44:35.683 03:25:50 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:44:35.683 03:25:50 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
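prep_key, whose trace follows, turns a raw hex PSK into the NVMe/TCP PSK interchange format ('NVMeTLSkey-1:<digest>:<base64 payload>:') and drops it into a mktemp file with 0600 permissions so it can later be registered by path. The actual encoding is done by an inline python helper in nvmf/common.sh, so the value in the sketch below is only a placeholder shape, not a real key:

  key_hex=00112233445566778899aabbccddeeff        # toy key0 from the trace, digest 0
  key_path=$(mktemp)

  # placeholder: format_interchange_psk would emit the real base64 payload here
  echo "NVMeTLSkey-1:00:<base64-of-${key_hex}-plus-crc>:" > "$key_path"
  chmod 0600 "$key_path"                           # mirrors the chmod in the trace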
00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eRA2wvGU7C 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:35.683 03:25:50 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:35.683 03:25:50 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:35.683 03:25:50 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:35.683 03:25:50 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:35.683 03:25:50 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:35.683 03:25:50 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eRA2wvGU7C 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eRA2wvGU7C 00:44:35.683 03:25:50 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.eRA2wvGU7C 00:44:35.683 03:25:50 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@17 -- # name=key1 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.go0dHlxbxR 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:35.683 03:25:50 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:35.683 03:25:50 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:35.683 03:25:50 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:35.683 03:25:50 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:35.683 03:25:50 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:35.683 03:25:50 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.go0dHlxbxR 00:44:35.683 03:25:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.go0dHlxbxR 00:44:35.683 03:25:50 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.go0dHlxbxR 00:44:35.683 03:25:50 keyring_file -- keyring/file.sh@30 -- # tgtpid=442500 00:44:35.683 03:25:50 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:35.683 03:25:50 keyring_file -- keyring/file.sh@32 -- # waitforlisten 442500 00:44:35.683 03:25:50 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 442500 ']' 00:44:35.683 03:25:50 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:35.683 03:25:50 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:35.683 03:25:50 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:35.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:35.683 03:25:50 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:35.683 03:25:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:35.683 [2024-12-14 03:25:50.687705] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:44:35.683 [2024-12-14 03:25:50.687755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442500 ] 00:44:35.683 [2024-12-14 03:25:50.761294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:35.683 [2024-12-14 03:25:50.783504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:35.990 03:25:50 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:35.990 03:25:50 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:35.990 03:25:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:35.990 03:25:50 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.990 03:25:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:35.990 [2024-12-14 03:25:50.999484] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:35.990 null0 00:44:35.990 [2024-12-14 03:25:51.031535] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:35.990 [2024-12-14 03:25:51.031800] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:35.990 03:25:51 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:35.990 [2024-12-14 03:25:51.063616] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:35.990 request: 00:44:35.990 { 00:44:35.990 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:35.990 "secure_channel": false, 00:44:35.990 "listen_address": { 00:44:35.990 "trtype": "tcp", 00:44:35.990 "traddr": "127.0.0.1", 00:44:35.990 "trsvcid": "4420" 00:44:35.990 }, 00:44:35.990 "method": "nvmf_subsystem_add_listener", 00:44:35.990 "req_id": 1 00:44:35.990 } 00:44:35.990 Got JSON-RPC error response 00:44:35.990 response: 00:44:35.990 { 00:44:35.990 "code": 
-32602, 00:44:35.990 "message": "Invalid parameters" 00:44:35.990 } 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:35.990 03:25:51 keyring_file -- keyring/file.sh@47 -- # bperfpid=442511 00:44:35.990 03:25:51 keyring_file -- keyring/file.sh@49 -- # waitforlisten 442511 /var/tmp/bperf.sock 00:44:35.990 03:25:51 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 442511 ']' 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:35.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:35.990 03:25:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:36.260 [2024-12-14 03:25:51.118865] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:44:36.260 [2024-12-14 03:25:51.118920] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442511 ] 00:44:36.260 [2024-12-14 03:25:51.194694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:36.260 [2024-12-14 03:25:51.216739] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:36.260 03:25:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:36.260 03:25:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:36.260 03:25:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eRA2wvGU7C 00:44:36.260 03:25:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eRA2wvGU7C 00:44:36.528 03:25:51 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.go0dHlxbxR 00:44:36.528 03:25:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.go0dHlxbxR 00:44:36.801 03:25:51 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:44:36.801 03:25:51 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:44:36.801 03:25:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:36.801 03:25:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:36.801 03:25:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:36.801 
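With both key files in place, the test registers them against the bdevperf instance over its private RPC socket, checks reference counts through keyring_get_keys, and attaches a TLS controller by key name. A condensed sketch of those RPC calls, reusing the temp paths and socket from this run and a relative rpc.py path for brevity:

  rpc() { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  rpc keyring_file_add_key key0 /tmp/tmp.eRA2wvGU7C
  rpc keyring_file_add_key key1 /tmp/tmp.go0dHlxbxR

  # what get_refcnt does: pull the key list and read the refcnt field for one key
  rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'

  # attach the NVMe/TCP controller with TLS, naming the registered key
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0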
03:25:51 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.eRA2wvGU7C == \/\t\m\p\/\t\m\p\.\e\R\A\2\w\v\G\U\7\C ]] 00:44:36.801 03:25:51 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:44:36.801 03:25:51 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:44:36.801 03:25:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:36.801 03:25:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:36.801 03:25:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:37.094 03:25:52 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.go0dHlxbxR == \/\t\m\p\/\t\m\p\.\g\o\0\d\H\l\x\b\x\R ]] 00:44:37.094 03:25:52 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:44:37.094 03:25:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:37.094 03:25:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:37.094 03:25:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:37.094 03:25:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:37.094 03:25:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:37.352 03:25:52 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:44:37.352 03:25:52 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:44:37.352 03:25:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:37.352 03:25:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:37.352 03:25:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:37.352 03:25:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:37.352 03:25:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:37.352 03:25:52 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:44:37.352 03:25:52 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:37.352 03:25:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:37.609 [2024-12-14 03:25:52.638122] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:37.609 nvme0n1 00:44:37.609 03:25:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:44:37.609 03:25:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:37.609 03:25:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:37.609 03:25:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:37.609 03:25:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:37.609 03:25:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:37.866 03:25:52 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:37.866 03:25:52 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:37.866 03:25:52 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:44:37.866 03:25:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:37.866 03:25:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:37.866 03:25:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:37.866 03:25:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:38.124 03:25:53 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:38.124 03:25:53 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:38.124 Running I/O for 1 seconds... 00:44:39.495 19444.00 IOPS, 75.95 MiB/s 00:44:39.495 Latency(us) 00:44:39.495 [2024-12-14T02:25:54.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:39.496 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:39.496 nvme0n1 : 1.00 19491.08 76.14 0.00 0.00 6555.37 2839.89 12233.39 00:44:39.496 [2024-12-14T02:25:54.629Z] =================================================================================================================== 00:44:39.496 [2024-12-14T02:25:54.629Z] Total : 19491.08 76.14 0.00 0.00 6555.37 2839.89 12233.39 00:44:39.496 { 00:44:39.496 "results": [ 00:44:39.496 { 00:44:39.496 "job": "nvme0n1", 00:44:39.496 "core_mask": "0x2", 00:44:39.496 "workload": "randrw", 00:44:39.496 "percentage": 50, 00:44:39.496 "status": "finished", 00:44:39.496 "queue_depth": 128, 00:44:39.496 "io_size": 4096, 00:44:39.496 "runtime": 1.004203, 00:44:39.496 "iops": 19491.078994984084, 00:44:39.496 "mibps": 76.13702732415658, 00:44:39.496 "io_failed": 0, 00:44:39.496 "io_timeout": 0, 00:44:39.496 "avg_latency_us": 6555.368612447176, 00:44:39.496 "min_latency_us": 2839.8933333333334, 00:44:39.496 "max_latency_us": 12233.386666666667 00:44:39.496 } 00:44:39.496 ], 00:44:39.496 "core_count": 1 00:44:39.496 } 00:44:39.496 03:25:54 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:39.496 03:25:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:39.496 03:25:54 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:39.496 03:25:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:39.496 03:25:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:39.496 03:25:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:39.496 03:25:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:39.496 03:25:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.753 03:25:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:39.753 03:25:54 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:39.753 03:25:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:39.753 03:25:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:39.753 03:25:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:39.753 03:25:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:39.753 03:25:54 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.753 03:25:54 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:39.753 03:25:54 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:39.753 03:25:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:39.753 03:25:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:39.753 03:25:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:39.753 03:25:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:39.753 03:25:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:39.753 03:25:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:39.753 03:25:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:39.754 03:25:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:40.011 [2024-12-14 03:25:55.013642] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:40.011 [2024-12-14 03:25:55.014166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b56a0 (107): Transport endpoint is not connected 00:44:40.011 [2024-12-14 03:25:55.015159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b56a0 (9): Bad file descriptor 00:44:40.011 [2024-12-14 03:25:55.016160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:40.011 [2024-12-14 03:25:55.016170] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:40.011 [2024-12-14 03:25:55.016177] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:40.011 [2024-12-14 03:25:55.016185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
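For reference: the negative path traced above can be reproduced by hand with the same RPCs the bperf_cmd wrapper expands to in this log; the rpc.py path, socket name and key labels below are simply the ones used in this run and are illustrative, not a fixed interface. The controller was attached with key0 (the PSK the target was set up with), so re-attaching with key1 is expected to fail, which surfaces as the JSON-RPC code -5 / "Input/output error" response that follows.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # rpc.py used throughout this log
  SOCK=/var/tmp/bperf.sock                                               # bdevperf's RPC socket in this run

  # key1 is present in bdevperf's keyring but does not match the PSK the target
  # was configured with, so this attach is expected to fail with -5 (Input/output error).
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
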
00:44:40.011 request: 00:44:40.011 { 00:44:40.011 "name": "nvme0", 00:44:40.011 "trtype": "tcp", 00:44:40.011 "traddr": "127.0.0.1", 00:44:40.011 "adrfam": "ipv4", 00:44:40.011 "trsvcid": "4420", 00:44:40.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:40.011 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:40.011 "prchk_reftag": false, 00:44:40.011 "prchk_guard": false, 00:44:40.011 "hdgst": false, 00:44:40.011 "ddgst": false, 00:44:40.011 "psk": "key1", 00:44:40.011 "allow_unrecognized_csi": false, 00:44:40.011 "method": "bdev_nvme_attach_controller", 00:44:40.011 "req_id": 1 00:44:40.011 } 00:44:40.011 Got JSON-RPC error response 00:44:40.011 response: 00:44:40.011 { 00:44:40.011 "code": -5, 00:44:40.011 "message": "Input/output error" 00:44:40.011 } 00:44:40.011 03:25:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:40.011 03:25:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:40.011 03:25:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:40.011 03:25:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:40.011 03:25:55 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:40.011 03:25:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:40.011 03:25:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:40.011 03:25:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:40.011 03:25:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:40.011 03:25:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:40.267 03:25:55 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:40.267 03:25:55 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:40.267 03:25:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:40.267 03:25:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:40.267 03:25:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:40.267 03:25:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:40.267 03:25:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:40.524 03:25:55 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:40.524 03:25:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:40.524 03:25:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:40.524 03:25:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:40.524 03:25:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:40.781 03:25:55 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:40.781 03:25:55 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:40.782 03:25:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:41.039 03:25:56 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:44:41.039 03:25:56 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.eRA2wvGU7C 00:44:41.039 03:25:56 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.eRA2wvGU7C 00:44:41.039 03:25:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:41.039 03:25:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.eRA2wvGU7C 00:44:41.039 03:25:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:41.039 03:25:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:41.039 03:25:56 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:41.039 03:25:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:41.039 03:25:56 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eRA2wvGU7C 00:44:41.039 03:25:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eRA2wvGU7C 00:44:41.296 [2024-12-14 03:25:56.183307] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.eRA2wvGU7C': 0100660 00:44:41.296 [2024-12-14 03:25:56.183337] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:41.296 request: 00:44:41.296 { 00:44:41.296 "name": "key0", 00:44:41.296 "path": "/tmp/tmp.eRA2wvGU7C", 00:44:41.296 "method": "keyring_file_add_key", 00:44:41.296 "req_id": 1 00:44:41.296 } 00:44:41.296 Got JSON-RPC error response 00:44:41.296 response: 00:44:41.296 { 00:44:41.296 "code": -1, 00:44:41.296 "message": "Operation not permitted" 00:44:41.296 } 00:44:41.296 03:25:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:41.296 03:25:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:41.296 03:25:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:41.296 03:25:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:41.296 03:25:56 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.eRA2wvGU7C 00:44:41.296 03:25:56 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eRA2wvGU7C 00:44:41.296 03:25:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eRA2wvGU7C 00:44:41.296 03:25:56 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.eRA2wvGU7C 00:44:41.296 03:25:56 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:41.296 03:25:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:41.296 03:25:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:41.296 03:25:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:41.296 03:25:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:41.296 03:25:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:41.554 03:25:56 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:41.554 03:25:56 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:41.554 03:25:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:41.554 03:25:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:41.554 03:25:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:41.554 03:25:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:41.554 03:25:56 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:41.554 03:25:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:41.554 03:25:56 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:41.554 03:25:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:41.811 [2024-12-14 03:25:56.804943] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.eRA2wvGU7C': No such file or directory 00:44:41.811 [2024-12-14 03:25:56.804970] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:41.811 [2024-12-14 03:25:56.804985] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:41.811 [2024-12-14 03:25:56.804992] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:41.811 [2024-12-14 03:25:56.804999] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:41.811 [2024-12-14 03:25:56.805005] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:41.811 request: 00:44:41.811 { 00:44:41.811 "name": "nvme0", 00:44:41.811 "trtype": "tcp", 00:44:41.812 "traddr": "127.0.0.1", 00:44:41.812 "adrfam": "ipv4", 00:44:41.812 "trsvcid": "4420", 00:44:41.812 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:41.812 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:41.812 "prchk_reftag": false, 00:44:41.812 "prchk_guard": false, 00:44:41.812 "hdgst": false, 00:44:41.812 "ddgst": false, 00:44:41.812 "psk": "key0", 00:44:41.812 "allow_unrecognized_csi": false, 00:44:41.812 "method": "bdev_nvme_attach_controller", 00:44:41.812 "req_id": 1 00:44:41.812 } 00:44:41.812 Got JSON-RPC error response 00:44:41.812 response: 00:44:41.812 { 00:44:41.812 "code": -19, 00:44:41.812 "message": "No such device" 00:44:41.812 } 00:44:41.812 03:25:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:41.812 03:25:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:41.812 03:25:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:41.812 03:25:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:41.812 03:25:56 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:41.812 03:25:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:42.069 03:25:56 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:42.069 03:25:56 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:44:42.069 03:25:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:42.069 03:25:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:42.069 03:25:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:42.069 03:25:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:42.069 03:25:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.iFc2npiPEr 00:44:42.069 03:25:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:42.069 03:25:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:42.069 03:25:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:42.069 03:25:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:42.069 03:25:57 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:42.069 03:25:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:42.069 03:25:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:42.069 03:25:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iFc2npiPEr 00:44:42.069 03:25:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.iFc2npiPEr 00:44:42.069 03:25:57 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.iFc2npiPEr 00:44:42.069 03:25:57 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iFc2npiPEr 00:44:42.069 03:25:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iFc2npiPEr 00:44:42.327 03:25:57 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:42.327 03:25:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:42.584 nvme0n1 00:44:42.584 03:25:57 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:42.584 03:25:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:42.584 03:25:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:42.584 03:25:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:42.584 03:25:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:42.584 03:25:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.584 03:25:57 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:42.584 03:25:57 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:42.584 03:25:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:42.842 03:25:57 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:42.842 03:25:57 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:42.842 03:25:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:42.842 03:25:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:42.842 03:25:57 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:43.099 03:25:58 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:43.099 03:25:58 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:43.099 03:25:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:43.099 03:25:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:43.099 03:25:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:43.099 03:25:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:43.099 03:25:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:43.357 03:25:58 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:43.357 03:25:58 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:43.357 03:25:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:43.357 03:25:58 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:43.357 03:25:58 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:43.357 03:25:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:43.614 03:25:58 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:43.614 03:25:58 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iFc2npiPEr 00:44:43.614 03:25:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iFc2npiPEr 00:44:43.871 03:25:58 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.go0dHlxbxR 00:44:43.871 03:25:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.go0dHlxbxR 00:44:44.129 03:25:59 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:44.129 03:25:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:44.386 nvme0n1 00:44:44.386 03:25:59 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:44.386 03:25:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:44.644 03:25:59 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:44.644 "subsystems": [ 00:44:44.644 { 00:44:44.644 "subsystem": "keyring", 00:44:44.644 "config": [ 00:44:44.644 { 00:44:44.644 "method": "keyring_file_add_key", 00:44:44.644 "params": { 00:44:44.644 "name": "key0", 00:44:44.644 "path": "/tmp/tmp.iFc2npiPEr" 00:44:44.644 } 00:44:44.644 }, 00:44:44.644 { 00:44:44.644 "method": "keyring_file_add_key", 00:44:44.644 "params": { 00:44:44.644 "name": "key1", 00:44:44.644 "path": "/tmp/tmp.go0dHlxbxR" 00:44:44.644 } 00:44:44.644 } 00:44:44.644 ] 00:44:44.644 
}, 00:44:44.644 { 00:44:44.644 "subsystem": "iobuf", 00:44:44.644 "config": [ 00:44:44.644 { 00:44:44.644 "method": "iobuf_set_options", 00:44:44.644 "params": { 00:44:44.644 "small_pool_count": 8192, 00:44:44.644 "large_pool_count": 1024, 00:44:44.644 "small_bufsize": 8192, 00:44:44.644 "large_bufsize": 135168, 00:44:44.644 "enable_numa": false 00:44:44.644 } 00:44:44.644 } 00:44:44.644 ] 00:44:44.644 }, 00:44:44.644 { 00:44:44.644 "subsystem": "sock", 00:44:44.644 "config": [ 00:44:44.644 { 00:44:44.644 "method": "sock_set_default_impl", 00:44:44.644 "params": { 00:44:44.644 "impl_name": "posix" 00:44:44.644 } 00:44:44.645 }, 00:44:44.645 { 00:44:44.645 "method": "sock_impl_set_options", 00:44:44.645 "params": { 00:44:44.645 "impl_name": "ssl", 00:44:44.645 "recv_buf_size": 4096, 00:44:44.645 "send_buf_size": 4096, 00:44:44.645 "enable_recv_pipe": true, 00:44:44.645 "enable_quickack": false, 00:44:44.645 "enable_placement_id": 0, 00:44:44.645 "enable_zerocopy_send_server": true, 00:44:44.645 "enable_zerocopy_send_client": false, 00:44:44.645 "zerocopy_threshold": 0, 00:44:44.645 "tls_version": 0, 00:44:44.645 "enable_ktls": false 00:44:44.645 } 00:44:44.645 }, 00:44:44.645 { 00:44:44.645 "method": "sock_impl_set_options", 00:44:44.645 "params": { 00:44:44.645 "impl_name": "posix", 00:44:44.645 "recv_buf_size": 2097152, 00:44:44.645 "send_buf_size": 2097152, 00:44:44.645 "enable_recv_pipe": true, 00:44:44.645 "enable_quickack": false, 00:44:44.645 "enable_placement_id": 0, 00:44:44.645 "enable_zerocopy_send_server": true, 00:44:44.645 "enable_zerocopy_send_client": false, 00:44:44.645 "zerocopy_threshold": 0, 00:44:44.645 "tls_version": 0, 00:44:44.645 "enable_ktls": false 00:44:44.645 } 00:44:44.645 } 00:44:44.645 ] 00:44:44.645 }, 00:44:44.645 { 00:44:44.645 "subsystem": "vmd", 00:44:44.645 "config": [] 00:44:44.645 }, 00:44:44.645 { 00:44:44.645 "subsystem": "accel", 00:44:44.645 "config": [ 00:44:44.645 { 00:44:44.645 "method": "accel_set_options", 00:44:44.645 "params": { 00:44:44.645 "small_cache_size": 128, 00:44:44.645 "large_cache_size": 16, 00:44:44.645 "task_count": 2048, 00:44:44.645 "sequence_count": 2048, 00:44:44.645 "buf_count": 2048 00:44:44.645 } 00:44:44.645 } 00:44:44.645 ] 00:44:44.645 }, 00:44:44.645 { 00:44:44.645 "subsystem": "bdev", 00:44:44.645 "config": [ 00:44:44.645 { 00:44:44.645 "method": "bdev_set_options", 00:44:44.645 "params": { 00:44:44.645 "bdev_io_pool_size": 65535, 00:44:44.645 "bdev_io_cache_size": 256, 00:44:44.645 "bdev_auto_examine": true, 00:44:44.645 "iobuf_small_cache_size": 128, 00:44:44.645 "iobuf_large_cache_size": 16 00:44:44.645 } 00:44:44.645 }, 00:44:44.645 { 00:44:44.645 "method": "bdev_raid_set_options", 00:44:44.645 "params": { 00:44:44.645 "process_window_size_kb": 1024, 00:44:44.645 "process_max_bandwidth_mb_sec": 0 00:44:44.645 } 00:44:44.645 }, 00:44:44.645 { 00:44:44.645 "method": "bdev_iscsi_set_options", 00:44:44.645 "params": { 00:44:44.645 "timeout_sec": 30 00:44:44.645 } 00:44:44.645 }, 00:44:44.645 { 00:44:44.645 "method": "bdev_nvme_set_options", 00:44:44.645 "params": { 00:44:44.645 "action_on_timeout": "none", 00:44:44.645 "timeout_us": 0, 00:44:44.645 "timeout_admin_us": 0, 00:44:44.645 "keep_alive_timeout_ms": 10000, 00:44:44.645 "arbitration_burst": 0, 00:44:44.645 "low_priority_weight": 0, 00:44:44.645 "medium_priority_weight": 0, 00:44:44.645 "high_priority_weight": 0, 00:44:44.645 "nvme_adminq_poll_period_us": 10000, 00:44:44.645 "nvme_ioq_poll_period_us": 0, 00:44:44.645 "io_queue_requests": 512, 00:44:44.645 
"delay_cmd_submit": true, 00:44:44.645 "transport_retry_count": 4, 00:44:44.645 "bdev_retry_count": 3, 00:44:44.645 "transport_ack_timeout": 0, 00:44:44.645 "ctrlr_loss_timeout_sec": 0, 00:44:44.645 "reconnect_delay_sec": 0, 00:44:44.645 "fast_io_fail_timeout_sec": 0, 00:44:44.645 "disable_auto_failback": false, 00:44:44.645 "generate_uuids": false, 00:44:44.645 "transport_tos": 0, 00:44:44.645 "nvme_error_stat": false, 00:44:44.645 "rdma_srq_size": 0, 00:44:44.645 "io_path_stat": false, 00:44:44.645 "allow_accel_sequence": false, 00:44:44.645 "rdma_max_cq_size": 0, 00:44:44.645 "rdma_cm_event_timeout_ms": 0, 00:44:44.645 "dhchap_digests": [ 00:44:44.645 "sha256", 00:44:44.645 "sha384", 00:44:44.645 "sha512" 00:44:44.645 ], 00:44:44.645 "dhchap_dhgroups": [ 00:44:44.645 "null", 00:44:44.645 "ffdhe2048", 00:44:44.645 "ffdhe3072", 00:44:44.645 "ffdhe4096", 00:44:44.645 "ffdhe6144", 00:44:44.645 "ffdhe8192" 00:44:44.645 ], 00:44:44.645 "rdma_umr_per_io": false 00:44:44.645 } 00:44:44.645 }, 00:44:44.645 { 00:44:44.645 "method": "bdev_nvme_attach_controller", 00:44:44.645 "params": { 00:44:44.645 "name": "nvme0", 00:44:44.645 "trtype": "TCP", 00:44:44.645 "adrfam": "IPv4", 00:44:44.645 "traddr": "127.0.0.1", 00:44:44.645 "trsvcid": "4420", 00:44:44.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:44.645 "prchk_reftag": false, 00:44:44.645 "prchk_guard": false, 00:44:44.645 "ctrlr_loss_timeout_sec": 0, 00:44:44.645 "reconnect_delay_sec": 0, 00:44:44.645 "fast_io_fail_timeout_sec": 0, 00:44:44.645 "psk": "key0", 00:44:44.645 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:44.645 "hdgst": false, 00:44:44.645 "ddgst": false, 00:44:44.645 "multipath": "multipath" 00:44:44.645 } 00:44:44.645 }, 00:44:44.645 { 00:44:44.645 "method": "bdev_nvme_set_hotplug", 00:44:44.645 "params": { 00:44:44.645 "period_us": 100000, 00:44:44.645 "enable": false 00:44:44.645 } 00:44:44.645 }, 00:44:44.645 { 00:44:44.645 "method": "bdev_wait_for_examine" 00:44:44.645 } 00:44:44.645 ] 00:44:44.645 }, 00:44:44.645 { 00:44:44.645 "subsystem": "nbd", 00:44:44.645 "config": [] 00:44:44.645 } 00:44:44.645 ] 00:44:44.645 }' 00:44:44.645 03:25:59 keyring_file -- keyring/file.sh@115 -- # killprocess 442511 00:44:44.645 03:25:59 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 442511 ']' 00:44:44.645 03:25:59 keyring_file -- common/autotest_common.sh@958 -- # kill -0 442511 00:44:44.645 03:25:59 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:44.645 03:25:59 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:44.645 03:25:59 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442511 00:44:44.645 03:25:59 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:44.645 03:25:59 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:44.645 03:25:59 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442511' 00:44:44.645 killing process with pid 442511 00:44:44.645 03:25:59 keyring_file -- common/autotest_common.sh@973 -- # kill 442511 00:44:44.645 Received shutdown signal, test time was about 1.000000 seconds 00:44:44.645 00:44:44.645 Latency(us) 00:44:44.645 [2024-12-14T02:25:59.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:44.645 [2024-12-14T02:25:59.778Z] =================================================================================================================== 00:44:44.645 [2024-12-14T02:25:59.778Z] Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:44:44.645 03:25:59 keyring_file -- common/autotest_common.sh@978 -- # wait 442511 00:44:44.645 03:25:59 keyring_file -- keyring/file.sh@118 -- # bperfpid=442745 00:44:44.645 03:25:59 keyring_file -- keyring/file.sh@120 -- # waitforlisten 442745 /var/tmp/bperf.sock 00:44:44.645 03:25:59 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 442745 ']' 00:44:44.645 03:25:59 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:44.645 03:25:59 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:44.645 03:25:59 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:44.645 03:25:59 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:44.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:44.645 03:25:59 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:44.645 "subsystems": [ 00:44:44.645 { 00:44:44.645 "subsystem": "keyring", 00:44:44.645 "config": [ 00:44:44.645 { 00:44:44.645 "method": "keyring_file_add_key", 00:44:44.645 "params": { 00:44:44.645 "name": "key0", 00:44:44.645 "path": "/tmp/tmp.iFc2npiPEr" 00:44:44.645 } 00:44:44.645 }, 00:44:44.645 { 00:44:44.645 "method": "keyring_file_add_key", 00:44:44.645 "params": { 00:44:44.645 "name": "key1", 00:44:44.645 "path": "/tmp/tmp.go0dHlxbxR" 00:44:44.645 } 00:44:44.645 } 00:44:44.645 ] 00:44:44.645 }, 00:44:44.645 { 00:44:44.645 "subsystem": "iobuf", 00:44:44.645 "config": [ 00:44:44.645 { 00:44:44.645 "method": "iobuf_set_options", 00:44:44.645 "params": { 00:44:44.645 "small_pool_count": 8192, 00:44:44.645 "large_pool_count": 1024, 00:44:44.645 "small_bufsize": 8192, 00:44:44.645 "large_bufsize": 135168, 00:44:44.646 "enable_numa": false 00:44:44.646 } 00:44:44.646 } 00:44:44.646 ] 00:44:44.646 }, 00:44:44.646 { 00:44:44.646 "subsystem": "sock", 00:44:44.646 "config": [ 00:44:44.646 { 00:44:44.646 "method": "sock_set_default_impl", 00:44:44.646 "params": { 00:44:44.646 "impl_name": "posix" 00:44:44.646 } 00:44:44.646 }, 00:44:44.646 { 00:44:44.646 "method": "sock_impl_set_options", 00:44:44.646 "params": { 00:44:44.646 "impl_name": "ssl", 00:44:44.646 "recv_buf_size": 4096, 00:44:44.646 "send_buf_size": 4096, 00:44:44.646 "enable_recv_pipe": true, 00:44:44.646 "enable_quickack": false, 00:44:44.646 "enable_placement_id": 0, 00:44:44.646 "enable_zerocopy_send_server": true, 00:44:44.646 "enable_zerocopy_send_client": false, 00:44:44.646 "zerocopy_threshold": 0, 00:44:44.646 "tls_version": 0, 00:44:44.646 "enable_ktls": false 00:44:44.646 } 00:44:44.646 }, 00:44:44.646 { 00:44:44.646 "method": "sock_impl_set_options", 00:44:44.646 "params": { 00:44:44.646 "impl_name": "posix", 00:44:44.646 "recv_buf_size": 2097152, 00:44:44.646 "send_buf_size": 2097152, 00:44:44.646 "enable_recv_pipe": true, 00:44:44.646 "enable_quickack": false, 00:44:44.646 "enable_placement_id": 0, 00:44:44.646 "enable_zerocopy_send_server": true, 00:44:44.646 "enable_zerocopy_send_client": false, 00:44:44.646 "zerocopy_threshold": 0, 00:44:44.646 "tls_version": 0, 00:44:44.646 "enable_ktls": false 00:44:44.646 } 00:44:44.646 } 00:44:44.646 ] 00:44:44.646 }, 00:44:44.646 { 00:44:44.646 "subsystem": "vmd", 00:44:44.646 "config": [] 00:44:44.646 }, 00:44:44.646 { 00:44:44.646 "subsystem": "accel", 
00:44:44.646 "config": [ 00:44:44.646 { 00:44:44.646 "method": "accel_set_options", 00:44:44.646 "params": { 00:44:44.646 "small_cache_size": 128, 00:44:44.646 "large_cache_size": 16, 00:44:44.646 "task_count": 2048, 00:44:44.646 "sequence_count": 2048, 00:44:44.646 "buf_count": 2048 00:44:44.646 } 00:44:44.646 } 00:44:44.646 ] 00:44:44.646 }, 00:44:44.646 { 00:44:44.646 "subsystem": "bdev", 00:44:44.646 "config": [ 00:44:44.646 { 00:44:44.646 "method": "bdev_set_options", 00:44:44.646 "params": { 00:44:44.646 "bdev_io_pool_size": 65535, 00:44:44.646 "bdev_io_cache_size": 256, 00:44:44.646 "bdev_auto_examine": true, 00:44:44.646 "iobuf_small_cache_size": 128, 00:44:44.646 "iobuf_large_cache_size": 16 00:44:44.646 } 00:44:44.646 }, 00:44:44.646 { 00:44:44.646 "method": "bdev_raid_set_options", 00:44:44.646 "params": { 00:44:44.646 "process_window_size_kb": 1024, 00:44:44.646 "process_max_bandwidth_mb_sec": 0 00:44:44.646 } 00:44:44.646 }, 00:44:44.646 { 00:44:44.646 "method": "bdev_iscsi_set_options", 00:44:44.646 "params": { 00:44:44.646 "timeout_sec": 30 00:44:44.646 } 00:44:44.646 }, 00:44:44.646 { 00:44:44.646 "method": "bdev_nvme_set_options", 00:44:44.646 "params": { 00:44:44.646 "action_on_timeout": "none", 00:44:44.646 "timeout_us": 0, 00:44:44.646 "timeout_admin_us": 0, 00:44:44.646 "keep_alive_timeout_ms": 10000, 00:44:44.646 "arbitration_burst": 0, 00:44:44.646 "low_priority_weight": 0, 00:44:44.646 "medium_priority_weight": 0, 00:44:44.646 "high_priority_weight": 0, 00:44:44.646 "nvme_adminq_poll_period_us": 10000, 00:44:44.646 "nvme_ioq_poll_period_us": 0, 00:44:44.646 "io_queue_requests": 512, 00:44:44.646 "delay_cmd_submit": true, 00:44:44.646 "transport_retry_count": 4, 00:44:44.646 "bdev_retry_count": 3, 00:44:44.646 "transport_ack_timeout": 0, 00:44:44.646 "ctrlr_loss_timeout_sec": 0, 00:44:44.646 "reconnect_delay_sec": 0, 00:44:44.646 "fast_io_fail_timeout_sec": 0, 00:44:44.646 "disable_auto_failback": false, 00:44:44.646 "generate_uuids": false, 00:44:44.646 "transport_tos": 0, 00:44:44.646 "nvme_error_stat": false, 00:44:44.646 "rdma_srq_size": 0, 00:44:44.646 "io_path_stat": false, 00:44:44.646 "allow_accel_sequence": false, 00:44:44.646 "rdma_max_cq_size": 0, 00:44:44.646 "rdma_cm_event_timeout_ms": 0, 00:44:44.646 "dhchap_digests": [ 00:44:44.646 "sha256", 00:44:44.646 "sha384", 00:44:44.646 "sha512" 00:44:44.646 ], 00:44:44.646 "dhchap_dhgroups": [ 00:44:44.646 "null", 00:44:44.646 "ffdhe2048", 00:44:44.646 "ffdhe3072", 00:44:44.646 "ffdhe4096", 00:44:44.646 "ffdhe6144", 00:44:44.646 "ffdhe8192" 00:44:44.646 ], 00:44:44.646 "rdma_umr_per_io": false 00:44:44.646 } 00:44:44.646 }, 00:44:44.646 { 00:44:44.646 "method": "bdev_nvme_attach_controller", 00:44:44.646 "params": { 00:44:44.646 "name": "nvme0", 00:44:44.646 "trtype": "TCP", 00:44:44.646 "adrfam": "IPv4", 00:44:44.646 "traddr": "127.0.0.1", 00:44:44.646 "trsvcid": "4420", 00:44:44.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:44.646 "prchk_reftag": false, 00:44:44.646 "prchk_guard": false, 00:44:44.646 "ctrlr_loss_timeout_sec": 0, 00:44:44.646 "reconnect_delay_sec": 0, 00:44:44.646 "fast_io_fail_timeout_sec": 0, 00:44:44.646 "psk": "key0", 00:44:44.646 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:44.646 "hdgst": false, 00:44:44.646 "ddgst": false, 00:44:44.646 "multipath": "multipath" 00:44:44.646 } 00:44:44.646 }, 00:44:44.646 { 00:44:44.646 "method": "bdev_nvme_set_hotplug", 00:44:44.646 "params": { 00:44:44.646 "period_us": 100000, 00:44:44.646 "enable": false 00:44:44.646 } 00:44:44.646 }, 00:44:44.646 
{ 00:44:44.646 "method": "bdev_wait_for_examine" 00:44:44.646 } 00:44:44.646 ] 00:44:44.646 }, 00:44:44.646 { 00:44:44.646 "subsystem": "nbd", 00:44:44.646 "config": [] 00:44:44.646 } 00:44:44.646 ] 00:44:44.646 }' 00:44:44.646 03:25:59 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:44.646 03:25:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:44.904 [2024-12-14 03:25:59.810622] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:44:44.904 [2024-12-14 03:25:59.810669] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442745 ] 00:44:44.904 [2024-12-14 03:25:59.883973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:44.904 [2024-12-14 03:25:59.906182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:45.162 [2024-12-14 03:26:00.062763] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:45.729 03:26:00 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:45.729 03:26:00 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:45.729 03:26:00 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:45.729 03:26:00 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:45.729 03:26:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:45.729 03:26:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:45.729 03:26:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:45.729 03:26:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:45.729 03:26:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:45.729 03:26:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:45.729 03:26:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:45.729 03:26:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:45.986 03:26:01 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:45.986 03:26:01 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:45.986 03:26:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:45.986 03:26:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:45.986 03:26:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:45.986 03:26:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:45.986 03:26:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:46.243 03:26:01 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:46.243 03:26:01 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:46.243 03:26:01 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:46.243 03:26:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:46.501 03:26:01 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:46.501 03:26:01 keyring_file -- 
keyring/file.sh@1 -- # cleanup 00:44:46.501 03:26:01 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.iFc2npiPEr /tmp/tmp.go0dHlxbxR 00:44:46.501 03:26:01 keyring_file -- keyring/file.sh@20 -- # killprocess 442745 00:44:46.501 03:26:01 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 442745 ']' 00:44:46.501 03:26:01 keyring_file -- common/autotest_common.sh@958 -- # kill -0 442745 00:44:46.501 03:26:01 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:46.501 03:26:01 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:46.501 03:26:01 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442745 00:44:46.501 03:26:01 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:46.501 03:26:01 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:46.501 03:26:01 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442745' 00:44:46.501 killing process with pid 442745 00:44:46.501 03:26:01 keyring_file -- common/autotest_common.sh@973 -- # kill 442745 00:44:46.501 Received shutdown signal, test time was about 1.000000 seconds 00:44:46.501 00:44:46.501 Latency(us) 00:44:46.501 [2024-12-14T02:26:01.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:46.501 [2024-12-14T02:26:01.634Z] =================================================================================================================== 00:44:46.501 [2024-12-14T02:26:01.634Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:46.501 03:26:01 keyring_file -- common/autotest_common.sh@978 -- # wait 442745 00:44:46.759 03:26:01 keyring_file -- keyring/file.sh@21 -- # killprocess 442500 00:44:46.759 03:26:01 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 442500 ']' 00:44:46.759 03:26:01 keyring_file -- common/autotest_common.sh@958 -- # kill -0 442500 00:44:46.759 03:26:01 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:46.759 03:26:01 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:46.759 03:26:01 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442500 00:44:46.759 03:26:01 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:46.759 03:26:01 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:46.759 03:26:01 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442500' 00:44:46.759 killing process with pid 442500 00:44:46.759 03:26:01 keyring_file -- common/autotest_common.sh@973 -- # kill 442500 00:44:46.759 03:26:01 keyring_file -- common/autotest_common.sh@978 -- # wait 442500 00:44:47.018 00:44:47.018 real 0m11.684s 00:44:47.018 user 0m29.093s 00:44:47.018 sys 0m2.692s 00:44:47.018 03:26:02 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:47.018 03:26:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:47.018 ************************************ 00:44:47.018 END TEST keyring_file 00:44:47.018 ************************************ 00:44:47.018 03:26:02 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:47.018 03:26:02 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:47.018 03:26:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:47.018 03:26:02 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:44:47.018 03:26:02 -- common/autotest_common.sh@10 -- # set +x 00:44:47.018 ************************************ 00:44:47.018 START TEST keyring_linux 00:44:47.018 ************************************ 00:44:47.018 03:26:02 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:47.018 Joined session keyring: 268159103 00:44:47.278 * Looking for test storage... 00:44:47.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:47.278 03:26:02 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:47.278 03:26:02 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:44:47.278 03:26:02 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:47.278 03:26:02 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:47.278 03:26:02 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:47.278 03:26:02 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:47.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:47.278 --rc genhtml_branch_coverage=1 00:44:47.278 --rc genhtml_function_coverage=1 00:44:47.278 --rc genhtml_legend=1 00:44:47.278 --rc geninfo_all_blocks=1 00:44:47.278 --rc geninfo_unexecuted_blocks=1 00:44:47.278 00:44:47.278 ' 00:44:47.278 03:26:02 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:47.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:47.278 --rc genhtml_branch_coverage=1 00:44:47.278 --rc genhtml_function_coverage=1 00:44:47.278 --rc genhtml_legend=1 00:44:47.278 --rc geninfo_all_blocks=1 00:44:47.278 --rc geninfo_unexecuted_blocks=1 00:44:47.278 00:44:47.278 ' 00:44:47.278 03:26:02 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:47.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:47.278 --rc genhtml_branch_coverage=1 00:44:47.278 --rc genhtml_function_coverage=1 00:44:47.278 --rc genhtml_legend=1 00:44:47.278 --rc geninfo_all_blocks=1 00:44:47.278 --rc geninfo_unexecuted_blocks=1 00:44:47.278 00:44:47.278 ' 00:44:47.278 03:26:02 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:47.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:47.278 --rc genhtml_branch_coverage=1 00:44:47.278 --rc genhtml_function_coverage=1 00:44:47.278 --rc genhtml_legend=1 00:44:47.278 --rc geninfo_all_blocks=1 00:44:47.278 --rc geninfo_unexecuted_blocks=1 00:44:47.278 00:44:47.278 ' 00:44:47.278 03:26:02 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:47.278 03:26:02 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:47.278 03:26:02 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:47.278 03:26:02 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:47.278 03:26:02 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:47.278 03:26:02 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:47.278 03:26:02 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:47.278 03:26:02 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:47.279 03:26:02 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:47.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:47.279 03:26:02 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:47.279 03:26:02 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:47.279 03:26:02 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:47.279 03:26:02 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:47.279 03:26:02 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:47.279 03:26:02 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:47.279 /tmp/:spdk-test:key0 00:44:47.279 03:26:02 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:47.279 
03:26:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:47.279 03:26:02 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:47.279 03:26:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:47.279 /tmp/:spdk-test:key1 00:44:47.279 03:26:02 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=442872 00:44:47.279 03:26:02 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 442872 00:44:47.279 03:26:02 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:47.279 03:26:02 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 442872 ']' 00:44:47.279 03:26:02 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:47.279 03:26:02 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:47.279 03:26:02 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:47.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:47.279 03:26:02 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:47.279 03:26:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:47.538 [2024-12-14 03:26:02.426336] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
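prep_key (keyring/common.sh@15-23 above) turns each raw hex key into the NVMe TLS PSK interchange format (NVMeTLSkey-1:00:<base64 payload>:) and stores it in a mode-0600 file whose path doubles as the key name later on. A condensed sketch of that flow for key0, assuming the formatter's output is redirected into the file (the redirect itself is not visible in the xtrace) and with the formatted value copied verbatim from this run rather than recomputed:

  key=00112233445566778899aabbccddeeff      # raw hex key, digest 0 (no hash)
  path=/tmp/:spdk-test:key0
  # format_interchange_psk (via the python helper shown above) produces the
  # interchange-format string; the value below is the one printed in this run
  psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  echo "$psk" > "$path"
  chmod 0600 "$path"                        # keyring/common.sh@21
  echo "$path"                              # prep_key reports the path it created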
00:44:47.538 [2024-12-14 03:26:02.426382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442872 ] 00:44:47.538 [2024-12-14 03:26:02.501953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:47.538 [2024-12-14 03:26:02.524273] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:47.796 03:26:02 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:47.796 03:26:02 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:47.796 03:26:02 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:47.796 03:26:02 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:47.796 03:26:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:47.796 [2024-12-14 03:26:02.734565] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:47.796 null0 00:44:47.796 [2024-12-14 03:26:02.766630] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:47.796 [2024-12-14 03:26:02.766895] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:47.796 03:26:02 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:47.796 03:26:02 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:47.796 614521383 00:44:47.796 03:26:02 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:47.796 706736724 00:44:47.796 03:26:02 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=442878 00:44:47.796 03:26:02 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 442878 /var/tmp/bperf.sock 00:44:47.796 03:26:02 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:47.796 03:26:02 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 442878 ']' 00:44:47.796 03:26:02 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:47.796 03:26:02 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:47.796 03:26:02 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:47.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:47.796 03:26:02 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:47.796 03:26:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:47.796 [2024-12-14 03:26:02.837767] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
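linux.sh@66-67 above loads both interchange-format PSKs into the kernel session keyring with keyctl, and the bare numbers printed right after (614521383 and 706736724) are the key serial numbers that the later check_keys/get_keysn steps compare against. A minimal sketch of that round trip, assuming keyutils is installed and that the payload is read back from the file prepared earlier (the trace shows it expanded inline):

  # Add the PSK as a 'user' key on the session keyring (@s); keyctl prints the serial
  sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
  # Look the serial up again by name, as get_keysn (linux.sh@16) does
  keyctl search @s user :spdk-test:key0     # prints the same serial, e.g. 614521383
  # Dump the payload to confirm it matches the interchange-format string
  keyctl print "$sn"
  # Cleanup later removes it the same way: keyctl unlink "$sn"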
00:44:47.796 [2024-12-14 03:26:02.837808] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442878 ] 00:44:47.796 [2024-12-14 03:26:02.907866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:48.054 [2024-12-14 03:26:02.930025] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:48.054 03:26:02 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:48.054 03:26:02 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:48.054 03:26:02 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:48.054 03:26:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:48.311 03:26:03 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:48.311 03:26:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:48.569 03:26:03 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:48.569 03:26:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:48.569 [2024-12-14 03:26:03.622160] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:48.569 nvme0n1 00:44:48.826 03:26:03 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:44:48.826 03:26:03 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:48.826 03:26:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:48.826 03:26:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:48.826 03:26:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:48.826 03:26:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:48.826 03:26:03 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:48.826 03:26:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:48.826 03:26:03 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:48.827 03:26:03 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:48.827 03:26:03 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:48.827 03:26:03 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:48.827 03:26:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:49.084 03:26:04 keyring_linux -- keyring/linux.sh@25 -- # sn=614521383 00:44:49.084 03:26:04 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:49.084 03:26:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:49.085 03:26:04 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 614521383 == \6\1\4\5\2\1\3\8\3 ]] 00:44:49.085 03:26:04 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 614521383 00:44:49.085 03:26:04 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:49.085 03:26:04 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:49.085 Running I/O for 1 seconds... 00:44:50.457 21855.00 IOPS, 85.37 MiB/s 00:44:50.457 Latency(us) 00:44:50.457 [2024-12-14T02:26:05.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:50.457 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:50.457 nvme0n1 : 1.01 21856.04 85.38 0.00 0.00 5837.42 1942.67 7084.13 00:44:50.457 [2024-12-14T02:26:05.590Z] =================================================================================================================== 00:44:50.457 [2024-12-14T02:26:05.590Z] Total : 21856.04 85.38 0.00 0.00 5837.42 1942.67 7084.13 00:44:50.457 { 00:44:50.457 "results": [ 00:44:50.457 { 00:44:50.457 "job": "nvme0n1", 00:44:50.457 "core_mask": "0x2", 00:44:50.457 "workload": "randread", 00:44:50.457 "status": "finished", 00:44:50.457 "queue_depth": 128, 00:44:50.457 "io_size": 4096, 00:44:50.457 "runtime": 1.005809, 00:44:50.457 "iops": 21856.038273668262, 00:44:50.457 "mibps": 85.37514950651665, 00:44:50.457 "io_failed": 0, 00:44:50.457 "io_timeout": 0, 00:44:50.457 "avg_latency_us": 5837.423259791657, 00:44:50.457 "min_latency_us": 1942.6742857142858, 00:44:50.457 "max_latency_us": 7084.129523809524 00:44:50.457 } 00:44:50.457 ], 00:44:50.457 "core_count": 1 00:44:50.457 } 00:44:50.457 03:26:05 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:50.457 03:26:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:50.457 03:26:05 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:50.457 03:26:05 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:50.457 03:26:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:50.457 03:26:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:50.457 03:26:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:50.457 03:26:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:50.715 03:26:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:50.715 [2024-12-14 03:26:05.785065] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:50.715 [2024-12-14 03:26:05.785758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75d3d0 (107): Transport endpoint is not connected 00:44:50.715 [2024-12-14 03:26:05.786755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75d3d0 (9): Bad file descriptor 00:44:50.715 [2024-12-14 03:26:05.787756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:50.715 [2024-12-14 03:26:05.787767] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:50.715 [2024-12-14 03:26:05.787774] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:50.715 [2024-12-14 03:26:05.787783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:44:50.715 request: 00:44:50.715 { 00:44:50.715 "name": "nvme0", 00:44:50.715 "trtype": "tcp", 00:44:50.715 "traddr": "127.0.0.1", 00:44:50.715 "adrfam": "ipv4", 00:44:50.715 "trsvcid": "4420", 00:44:50.715 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:50.715 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:50.715 "prchk_reftag": false, 00:44:50.715 "prchk_guard": false, 00:44:50.715 "hdgst": false, 00:44:50.715 "ddgst": false, 00:44:50.715 "psk": ":spdk-test:key1", 00:44:50.715 "allow_unrecognized_csi": false, 00:44:50.715 "method": "bdev_nvme_attach_controller", 00:44:50.715 "req_id": 1 00:44:50.715 } 00:44:50.715 Got JSON-RPC error response 00:44:50.715 response: 00:44:50.715 { 00:44:50.715 "code": -5, 00:44:50.715 "message": "Input/output error" 00:44:50.715 } 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@33 -- # sn=614521383 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 614521383 00:44:50.715 1 links removed 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@33 -- # sn=706736724 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 706736724 00:44:50.715 1 links removed 00:44:50.715 03:26:05 keyring_linux -- keyring/linux.sh@41 -- # killprocess 442878 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 442878 ']' 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 442878 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:50.715 03:26:05 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442878 00:44:50.974 03:26:05 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:50.974 03:26:05 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:50.974 03:26:05 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442878' 00:44:50.974 killing process with pid 442878 00:44:50.974 03:26:05 keyring_linux -- common/autotest_common.sh@973 -- # kill 442878 00:44:50.974 Received shutdown signal, test time was about 1.000000 seconds 00:44:50.974 00:44:50.974 
Latency(us) 00:44:50.974 [2024-12-14T02:26:06.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:50.974 [2024-12-14T02:26:06.107Z] =================================================================================================================== 00:44:50.974 [2024-12-14T02:26:06.107Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:50.974 03:26:05 keyring_linux -- common/autotest_common.sh@978 -- # wait 442878 00:44:50.974 03:26:06 keyring_linux -- keyring/linux.sh@42 -- # killprocess 442872 00:44:50.974 03:26:06 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 442872 ']' 00:44:50.974 03:26:06 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 442872 00:44:50.974 03:26:06 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:50.974 03:26:06 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:50.974 03:26:06 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442872 00:44:50.974 03:26:06 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:50.974 03:26:06 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:50.974 03:26:06 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442872' 00:44:50.974 killing process with pid 442872 00:44:50.974 03:26:06 keyring_linux -- common/autotest_common.sh@973 -- # kill 442872 00:44:50.974 03:26:06 keyring_linux -- common/autotest_common.sh@978 -- # wait 442872 00:44:51.233 00:44:51.233 real 0m4.284s 00:44:51.233 user 0m8.107s 00:44:51.233 sys 0m1.431s 00:44:51.233 03:26:06 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:51.233 03:26:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:51.233 ************************************ 00:44:51.233 END TEST keyring_linux 00:44:51.233 ************************************ 00:44:51.491 03:26:06 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:51.491 03:26:06 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:51.491 03:26:06 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:51.491 03:26:06 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:51.491 03:26:06 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:51.491 03:26:06 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:51.491 03:26:06 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:51.491 03:26:06 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:44:51.491 03:26:06 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:51.491 03:26:06 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:51.491 03:26:06 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:51.491 03:26:06 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:51.491 03:26:06 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:51.491 03:26:06 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:51.491 03:26:06 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:51.491 03:26:06 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:44:51.491 03:26:06 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:51.491 03:26:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:51.491 03:26:06 -- common/autotest_common.sh@10 -- # set +x 00:44:51.491 03:26:06 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:51.491 03:26:06 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:51.491 03:26:06 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:51.491 03:26:06 -- common/autotest_common.sh@10 -- # set +x 00:44:56.761 INFO: APP EXITING 00:44:56.761 INFO: 
killing all VMs 00:44:56.761 INFO: killing vhost app 00:44:56.761 INFO: EXIT DONE 00:44:59.297 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:44:59.297 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:44:59.297 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:44:59.297 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:44:59.297 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:44:59.297 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:44:59.297 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:44:59.297 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:44:59.297 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:44:59.297 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:44:59.297 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:44:59.297 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:44:59.556 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:44:59.556 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:44:59.556 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:44:59.556 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:44:59.556 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:45:02.842 Cleaning 00:45:02.842 Removing: /var/run/dpdk/spdk0/config 00:45:02.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:02.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:02.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:02.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:02.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:02.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:02.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:02.842 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:02.842 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:02.842 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:02.842 Removing: /var/run/dpdk/spdk1/config 00:45:02.842 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:02.842 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:02.842 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:02.842 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:02.842 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:02.843 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:02.843 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:02.843 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:02.843 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:02.843 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:02.843 Removing: /var/run/dpdk/spdk2/config 00:45:02.843 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:02.843 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:02.843 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:02.843 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:02.843 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:02.843 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:02.843 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:02.843 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:02.843 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:02.843 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:02.843 Removing: /var/run/dpdk/spdk3/config 00:45:02.843 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:02.843 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:02.843 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:02.843 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:02.843 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:02.843 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:02.843 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:02.843 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:02.843 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:02.843 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:02.843 Removing: /var/run/dpdk/spdk4/config 00:45:02.843 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:02.843 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:02.843 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:02.843 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:02.843 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:02.843 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:02.843 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:02.843 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:02.843 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:02.843 Removing: /var/run/dpdk/spdk4/hugepage_info 00:45:02.843 Removing: /dev/shm/bdev_svc_trace.1 00:45:02.843 Removing: /dev/shm/nvmf_trace.0 00:45:02.843 Removing: /dev/shm/spdk_tgt_trace.pid104315 00:45:02.843 Removing: /var/run/dpdk/spdk0 00:45:02.843 Removing: /var/run/dpdk/spdk1 00:45:02.843 Removing: /var/run/dpdk/spdk2 00:45:02.843 Removing: /var/run/dpdk/spdk3 00:45:02.843 Removing: /var/run/dpdk/spdk4 00:45:02.843 Removing: /var/run/dpdk/spdk_pid102231 00:45:02.843 Removing: /var/run/dpdk/spdk_pid103258 00:45:02.843 Removing: /var/run/dpdk/spdk_pid104315 00:45:02.843 Removing: /var/run/dpdk/spdk_pid104938 00:45:02.843 Removing: /var/run/dpdk/spdk_pid105882 00:45:02.843 Removing: /var/run/dpdk/spdk_pid105963 00:45:02.843 Removing: /var/run/dpdk/spdk_pid107006 00:45:02.843 Removing: /var/run/dpdk/spdk_pid107063 00:45:02.843 Removing: /var/run/dpdk/spdk_pid107409 00:45:02.843 Removing: /var/run/dpdk/spdk_pid108896 00:45:02.843 Removing: /var/run/dpdk/spdk_pid110196 00:45:02.843 Removing: /var/run/dpdk/spdk_pid110639 00:45:02.843 Removing: /var/run/dpdk/spdk_pid110830 00:45:02.843 Removing: /var/run/dpdk/spdk_pid111020 00:45:02.843 Removing: /var/run/dpdk/spdk_pid111304 00:45:02.843 Removing: /var/run/dpdk/spdk_pid111550 00:45:02.843 Removing: /var/run/dpdk/spdk_pid111790 00:45:02.843 Removing: /var/run/dpdk/spdk_pid112066 00:45:02.843 Removing: /var/run/dpdk/spdk_pid112791 00:45:02.843 Removing: /var/run/dpdk/spdk_pid115724 00:45:02.843 Removing: /var/run/dpdk/spdk_pid115974 00:45:02.843 Removing: /var/run/dpdk/spdk_pid116222 00:45:02.843 Removing: /var/run/dpdk/spdk_pid116278 00:45:02.843 Removing: /var/run/dpdk/spdk_pid116715 00:45:02.843 Removing: /var/run/dpdk/spdk_pid116825 00:45:02.843 Removing: /var/run/dpdk/spdk_pid117199 00:45:02.843 Removing: /var/run/dpdk/spdk_pid117340 00:45:02.843 Removing: /var/run/dpdk/spdk_pid117668 00:45:02.843 Removing: /var/run/dpdk/spdk_pid117679 00:45:02.843 Removing: /var/run/dpdk/spdk_pid117929 00:45:02.843 Removing: /var/run/dpdk/spdk_pid117936 00:45:02.843 Removing: /var/run/dpdk/spdk_pid118485 00:45:02.843 Removing: /var/run/dpdk/spdk_pid118733 00:45:02.843 Removing: /var/run/dpdk/spdk_pid119025 00:45:02.843 Removing: /var/run/dpdk/spdk_pid122669 00:45:02.843 
Removing: /var/run/dpdk/spdk_pid126988 00:45:02.843 Removing: /var/run/dpdk/spdk_pid137379 00:45:02.843 Removing: /var/run/dpdk/spdk_pid138056 00:45:02.843 Removing: /var/run/dpdk/spdk_pid142254 00:45:02.843 Removing: /var/run/dpdk/spdk_pid142495 00:45:02.843 Removing: /var/run/dpdk/spdk_pid146684 00:45:02.843 Removing: /var/run/dpdk/spdk_pid152454 00:45:02.843 Removing: /var/run/dpdk/spdk_pid155191 00:45:02.843 Removing: /var/run/dpdk/spdk_pid165203 00:45:02.843 Removing: /var/run/dpdk/spdk_pid174276 00:45:02.843 Removing: /var/run/dpdk/spdk_pid176456 00:45:02.843 Removing: /var/run/dpdk/spdk_pid177363 00:45:02.843 Removing: /var/run/dpdk/spdk_pid193929 00:45:02.843 Removing: /var/run/dpdk/spdk_pid197936 00:45:02.843 Removing: /var/run/dpdk/spdk_pid279979 00:45:02.843 Removing: /var/run/dpdk/spdk_pid285051 00:45:02.843 Removing: /var/run/dpdk/spdk_pid290900 00:45:02.843 Removing: /var/run/dpdk/spdk_pid297257 00:45:02.843 Removing: /var/run/dpdk/spdk_pid297259 00:45:02.843 Removing: /var/run/dpdk/spdk_pid298146 00:45:02.843 Removing: /var/run/dpdk/spdk_pid299029 00:45:02.843 Removing: /var/run/dpdk/spdk_pid300243 00:45:02.843 Removing: /var/run/dpdk/spdk_pid300908 00:45:02.843 Removing: /var/run/dpdk/spdk_pid300919 00:45:02.843 Removing: /var/run/dpdk/spdk_pid301148 00:45:02.843 Removing: /var/run/dpdk/spdk_pid301374 00:45:02.843 Removing: /var/run/dpdk/spdk_pid301377 00:45:02.843 Removing: /var/run/dpdk/spdk_pid302275 00:45:02.843 Removing: /var/run/dpdk/spdk_pid303081 00:45:02.843 Removing: /var/run/dpdk/spdk_pid303877 00:45:02.843 Removing: /var/run/dpdk/spdk_pid304538 00:45:02.843 Removing: /var/run/dpdk/spdk_pid304540 00:45:02.843 Removing: /var/run/dpdk/spdk_pid304776 00:45:02.843 Removing: /var/run/dpdk/spdk_pid305597 00:45:02.843 Removing: /var/run/dpdk/spdk_pid305734 00:45:02.843 Removing: /var/run/dpdk/spdk_pid308308 00:45:02.843 Removing: /var/run/dpdk/spdk_pid314335 00:45:02.843 Removing: /var/run/dpdk/spdk_pid316725 00:45:02.843 Removing: /var/run/dpdk/spdk_pid316860 00:45:02.843 Removing: /var/run/dpdk/spdk_pid316994 00:45:02.843 Removing: /var/run/dpdk/spdk_pid317020 00:45:02.843 Removing: /var/run/dpdk/spdk_pid317037 00:45:02.843 Removing: /var/run/dpdk/spdk_pid317060 00:45:02.843 Removing: /var/run/dpdk/spdk_pid317136 00:45:02.843 Removing: /var/run/dpdk/spdk_pid317278 00:45:02.843 Removing: /var/run/dpdk/spdk_pid317409 00:45:02.843 Removing: /var/run/dpdk/spdk_pid317483 00:45:02.843 Removing: /var/run/dpdk/spdk_pid317686 00:45:02.843 Removing: /var/run/dpdk/spdk_pid317747 00:45:02.843 Removing: /var/run/dpdk/spdk_pid317834 00:45:02.843 Removing: /var/run/dpdk/spdk_pid320168 00:45:02.843 Removing: /var/run/dpdk/spdk_pid322574 00:45:02.843 Removing: /var/run/dpdk/spdk_pid322575 00:45:02.843 Removing: /var/run/dpdk/spdk_pid322576 00:45:02.843 Removing: /var/run/dpdk/spdk_pid324849 00:45:02.843 Removing: /var/run/dpdk/spdk_pid327617 00:45:02.843 Removing: /var/run/dpdk/spdk_pid328005 00:45:02.843 Removing: /var/run/dpdk/spdk_pid340022 00:45:02.843 Removing: /var/run/dpdk/spdk_pid340812 00:45:02.843 Removing: /var/run/dpdk/spdk_pid343403 00:45:02.843 Removing: /var/run/dpdk/spdk_pid343646 00:45:02.843 Removing: /var/run/dpdk/spdk_pid343916 00:45:02.843 Removing: /var/run/dpdk/spdk_pid344183 00:45:02.843 Removing: /var/run/dpdk/spdk_pid346528 00:45:02.843 Removing: /var/run/dpdk/spdk_pid348937 00:45:02.843 Removing: /var/run/dpdk/spdk_pid351236 00:45:02.843 Removing: /var/run/dpdk/spdk_pid355846 00:45:02.843 Removing: /var/run/dpdk/spdk_pid355849 00:45:02.843 Removing: 
/var/run/dpdk/spdk_pid358209 00:45:02.843 Removing: /var/run/dpdk/spdk_pid358232 00:45:02.843 Removing: /var/run/dpdk/spdk_pid358247 00:45:02.843 Removing: /var/run/dpdk/spdk_pid358278 00:45:02.843 Removing: /var/run/dpdk/spdk_pid358293 00:45:02.843 Removing: /var/run/dpdk/spdk_pid358545 00:45:02.843 Removing: /var/run/dpdk/spdk_pid359058 00:45:02.843 Removing: /var/run/dpdk/spdk_pid359179 00:45:02.843 Removing: /var/run/dpdk/spdk_pid359304 00:45:02.843 Removing: /var/run/dpdk/spdk_pid359423 00:45:02.843 Removing: /var/run/dpdk/spdk_pid359554 00:45:02.843 Removing: /var/run/dpdk/spdk_pid362009 00:45:02.843 Removing: /var/run/dpdk/spdk_pid362158 00:45:02.843 Removing: /var/run/dpdk/spdk_pid362420 00:45:02.843 Removing: /var/run/dpdk/spdk_pid362617 00:45:02.843 Removing: /var/run/dpdk/spdk_pid365137 00:45:03.103 Removing: /var/run/dpdk/spdk_pid365343 00:45:03.103 Removing: /var/run/dpdk/spdk_pid367748 00:45:03.103 Removing: /var/run/dpdk/spdk_pid370329 00:45:03.103 Removing: /var/run/dpdk/spdk_pid373680 00:45:03.103 Removing: /var/run/dpdk/spdk_pid377031 00:45:03.103 Removing: /var/run/dpdk/spdk_pid377034 00:45:03.103 Removing: /var/run/dpdk/spdk_pid385957 00:45:03.103 Removing: /var/run/dpdk/spdk_pid386006 00:45:03.103 Removing: /var/run/dpdk/spdk_pid386056 00:45:03.103 Removing: /var/run/dpdk/spdk_pid386108 00:45:03.103 Removing: /var/run/dpdk/spdk_pid386219 00:45:03.103 Removing: /var/run/dpdk/spdk_pid386267 00:45:03.103 Removing: /var/run/dpdk/spdk_pid386316 00:45:03.103 Removing: /var/run/dpdk/spdk_pid386374 00:45:03.103 Removing: /var/run/dpdk/spdk_pid388690 00:45:03.103 Removing: /var/run/dpdk/spdk_pid388718 00:45:03.103 Removing: /var/run/dpdk/spdk_pid391309 00:45:03.103 Removing: /var/run/dpdk/spdk_pid391365 00:45:03.103 Removing: /var/run/dpdk/spdk_pid394243 00:45:03.103 Removing: /var/run/dpdk/spdk_pid396554 00:45:03.103 Removing: /var/run/dpdk/spdk_pid399499 00:45:03.103 Removing: /var/run/dpdk/spdk_pid399551 00:45:03.103 Removing: /var/run/dpdk/spdk_pid401882 00:45:03.103 Removing: /var/run/dpdk/spdk_pid401909 00:45:03.103 Removing: /var/run/dpdk/spdk_pid404226 00:45:03.103 Removing: /var/run/dpdk/spdk_pid406669 00:45:03.103 Removing: /var/run/dpdk/spdk_pid406920 00:45:03.103 Removing: /var/run/dpdk/spdk_pid411863 00:45:03.103 Removing: /var/run/dpdk/spdk_pid416879 00:45:03.103 Removing: /var/run/dpdk/spdk_pid417006 00:45:03.103 Removing: /var/run/dpdk/spdk_pid417085 00:45:03.103 Removing: /var/run/dpdk/spdk_pid425058 00:45:03.103 Removing: /var/run/dpdk/spdk_pid427378 00:45:03.103 Removing: /var/run/dpdk/spdk_pid427733 00:45:03.103 Removing: /var/run/dpdk/spdk_pid430376 00:45:03.103 Removing: /var/run/dpdk/spdk_pid430384 00:45:03.103 Removing: /var/run/dpdk/spdk_pid433485 00:45:03.103 Removing: /var/run/dpdk/spdk_pid434273 00:45:03.103 Removing: /var/run/dpdk/spdk_pid434551 00:45:03.103 Removing: /var/run/dpdk/spdk_pid434753 00:45:03.103 Removing: /var/run/dpdk/spdk_pid435034 00:45:03.103 Removing: /var/run/dpdk/spdk_pid435249 00:45:03.103 Removing: /var/run/dpdk/spdk_pid439482 00:45:03.103 Removing: /var/run/dpdk/spdk_pid439519 00:45:03.103 Removing: /var/run/dpdk/spdk_pid439561 00:45:03.103 Removing: /var/run/dpdk/spdk_pid440546 00:45:03.103 Removing: /var/run/dpdk/spdk_pid440581 00:45:03.103 Removing: /var/run/dpdk/spdk_pid440616 00:45:03.103 Removing: /var/run/dpdk/spdk_pid442500 00:45:03.103 Removing: /var/run/dpdk/spdk_pid442511 00:45:03.103 Removing: /var/run/dpdk/spdk_pid442745 00:45:03.103 Removing: /var/run/dpdk/spdk_pid442872 00:45:03.103 Removing: 
/var/run/dpdk/spdk_pid442878 00:45:03.103 Clean 00:45:03.361 03:26:18 -- common/autotest_common.sh@1453 -- # return 0 00:45:03.361 03:26:18 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:45:03.361 03:26:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:03.361 03:26:18 -- common/autotest_common.sh@10 -- # set +x 00:45:03.361 03:26:18 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:45:03.361 03:26:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:03.361 03:26:18 -- common/autotest_common.sh@10 -- # set +x 00:45:03.361 03:26:18 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:03.361 03:26:18 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:45:03.361 03:26:18 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:45:03.361 03:26:18 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:45:03.361 03:26:18 -- spdk/autotest.sh@398 -- # hostname 00:45:03.361 03:26:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:45:03.620 geninfo: WARNING: invalid characters removed from testname! 00:45:25.547 03:26:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:27.452 03:26:42 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:28.829 03:26:43 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:30.730 03:26:45 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:32.633 03:26:47 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 
--rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:34.537 03:26:49 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:36.441 03:26:51 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:45:36.441 03:26:51 -- spdk/autorun.sh@1 -- $ timing_finish 00:45:36.441 03:26:51 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:45:36.441 03:26:51 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:45:36.441 03:26:51 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:45:36.441 03:26:51 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:36.441 + [[ -n 7011 ]] 00:45:36.441 + sudo kill 7011 00:45:36.452 [Pipeline] } 00:45:36.467 [Pipeline] // stage 00:45:36.472 [Pipeline] } 00:45:36.487 [Pipeline] // timeout 00:45:36.492 [Pipeline] } 00:45:36.507 [Pipeline] // catchError 00:45:36.512 [Pipeline] } 00:45:36.527 [Pipeline] // wrap 00:45:36.534 [Pipeline] } 00:45:36.547 [Pipeline] // catchError 00:45:36.557 [Pipeline] stage 00:45:36.559 [Pipeline] { (Epilogue) 00:45:36.572 [Pipeline] catchError 00:45:36.573 [Pipeline] { 00:45:36.586 [Pipeline] echo 00:45:36.588 Cleanup processes 00:45:36.594 [Pipeline] sh 00:45:36.881 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:36.881 452291 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:36.895 [Pipeline] sh 00:45:37.180 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:37.180 ++ grep -v 'sudo pgrep' 00:45:37.180 ++ awk '{print $1}' 00:45:37.180 + sudo kill -9 00:45:37.180 + true 00:45:37.192 [Pipeline] sh 00:45:37.477 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:45:47.466 [Pipeline] sh 00:45:47.753 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:45:47.753 Artifacts sizes are good 00:45:47.767 [Pipeline] archiveArtifacts 00:45:47.774 Archiving artifacts 00:45:48.236 [Pipeline] sh 00:45:48.594 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:45:48.609 [Pipeline] cleanWs 00:45:48.621 [WS-CLEANUP] Deleting project workspace... 00:45:48.621 [WS-CLEANUP] Deferred wipeout is used... 00:45:48.628 [WS-CLEANUP] done 00:45:48.630 [Pipeline] } 00:45:48.646 [Pipeline] // catchError 00:45:48.658 [Pipeline] sh 00:45:48.940 + logger -p user.info -t JENKINS-CI 00:45:48.948 [Pipeline] } 00:45:48.961 [Pipeline] // stage 00:45:48.965 [Pipeline] } 00:45:48.979 [Pipeline] // node 00:45:48.983 [Pipeline] End of Pipeline 00:45:49.057 Finished: SUCCESS
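The coverage steps near the end of the run (autotest.sh@398-408 above) capture the counters gathered during the tests, merge them with the pre-test baseline, and then prune third-party and tooling paths before the results are archived. A condensed sketch of that lcov sequence; OPTS abbreviates the long --rc branch/function-coverage flag list shown in the trace, spdk_dir and out stand in for the workspace paths, and the per-pattern --ignore-errors handling is folded into one loop:

  OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'   # abbreviated
  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
  spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Capture counters from the instrumented build, tagged with the node name
  lcov $OPTS -q -c --no-external -d "$spdk_dir" -t "$(hostname)" -o "$out/cov_test.info"
  # Merge with the baseline captured before the tests ran
  lcov $OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  # Strip DPDK, system headers and helper apps from the combined report
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $OPTS -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
  done

lcov -r reads the tracefile, drops records whose source path matches the pattern, and writes the result, which is why the trace uses the same cov_total.info as both input and output at each step.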